
Page 1:

Meta-Learning: Towards Universal Learning Paradigms

Włodzisław Duch & Co

Department of Informatics, Nicolaus Copernicus University, Toruń, Poland

Google: W. Duch

6th Int. Conf. on Advanced Computational Intelligence (ICACI2013), 10/2013

Page 2:

Toruń

Page 3:

Copernicus

Nicolaus Copernicus: born in Toruń in 1473.

Page 4:

Bio – Nano – Info – Cognitive – Neuro

Page 5:

Interdisciplinary Center of Innovative Technologies

Why am I interested in this? Bio + Neuro + Cog Sci = Neurocognitive Informatics

Neurocognitive lab: 5 rooms, many projects requiring experimental work. Funding: national/EU grants. Pushing the limits of brain plasticity and understanding brain-mind relations, with a lot of help from computational intelligence!

Page 6:

Our toys

Page 7:

DI NCU Projects: NCI

Neurocognitive Informatics: from complex cognition => algorithms.

• Computational creativity, insight, intuition, imagery.
• Imagery agnosia, especially imagery amusia.
• Neurocognitive approach to language, word games.
• Medical information retrieval, analysis, visualization.
• Comprehensive theory of autism, ADHD, phenomics.
• Visualization of high-D trajectories, EEG signals, neurofeedback.
• Brain stem models & consciousness in artificial systems.
• Geometric theory of brain-mind processes.
• Infants: observation, guided development.
• Neural determinism, free will & social consequences.

Page 8:

DI NCU Projects: CI

Google Duch W => List of projects, talks, papers

Computational intelligence (CI), main themes:
• Foundations of computational intelligence: transformation-based learning, k-separability, learning hard Boolean problems.
• Novel learning: projection pursuit networks, QPC (Quality of Projected Clusters), search-based neural training, transfer learning or learning from others (ULM), aRPM, SFM ...
• Understanding of data: prototype-based rules, visualization.
• Similarity-based framework for meta-learning, heterogeneous systems, new transfer functions for neural networks.
• Feature selection, extraction, creation of enhanced spaces.
• General meta-learning, or learning how to learn; deep learning.

Page 9:

Page 10:

Norbert Tomek Marek Krzysztof

Page 11:

Plan

• Problems with computational intelligence.
• Problems with current approaches to data mining/pattern recognition.
• Meta-learning as search in the space of all models.
• Our initial attempts: similarity-based framework for meta-learning and heterogeneous systems.
• Hard problems: support features, k-separability and the goal of learning.
• Transfer learning and more components to build algorithms: SFM, aRPM, LOK, ULM, QPC-PP, QPC-NN, C3S, cLVQ.
• Implementation of meta-learning – algorithms on demand.
• Project page: http://www.is.umk.pl/projects/meta.html

Page 12:

What is there to learn?

Brains ... what is in EEG? What happens in the brain?

Industry: what happens with our machines? Cognitive robotics: vision, perception, language. Bioinformatics, life sciences.

Page 13:

Infant’s behavior

Cameras, microphones, speakers, motion & pressure detectors.

Understand the baby's reactions: what is perceived by the developing brain, and how can it be monitored and corrected?

Page 14:

What can we learn?

What can we learn using pattern recognition, machine learning, and computational intelligence techniques? Everything?

Neural networks are universal approximators and evolutionary algorithms solve global optimization problems – so everything can be learned? Not at all! All non-trivial problems are hard and need deep transformations.

Duda, Hart & Stork, Ch. 9, No Free Lunch + Ugly Duckling Theorems:
• Uniformly averaged over all target functions, the expected error for all learning algorithms [predictions by economists] is the same.
• Averaged over all target functions, no learning algorithm yields generalization error that is superior to any other.
• There is no problem-independent or "best" set of features.
"Experience with a broad range of techniques is the best insurance for solving arbitrary new classification problems."
In practice: try as many models as you can, rely on your experience and intuition. There is no free lunch, but do we have to cook ourselves?

Page 15:

Data mining packages

• No free lunch => provide different types of tools for knowledge discovery: decision trees, neural, neurofuzzy, similarity-based, SVM, committees, tools for visualization of data.

• Support the process of knowledge discovery/model building and evaluating, organizing it into projects.

• Many other interesting DM packages of this sort exist: Weka, Yale, Orange, Knime ... >170 packages on the-data-mine.com list!

• We are building Intemi, radically new DM tools.

GhostMiner, data mining tools from our lab + Fujitsu: http://www.fqspl.com.pl/ghostminer/

• Separate the process of model building (hackers) and knowledge discovery, from model use (lamers) => GM Developer & Analyzer

Page 16:

What DM packages do?

Hundreds of components ... transforming, visualizing ...

Rapid Miner 5.2, type and number of components (March 2012): total 712.
Process control: 38
Data transformations: 114
Data modeling: 263
Performance evaluation: 31
Other packages: 266
Text, series, web ... specific transformations, visualization, presentation, plugin extensions ... ~ billions of models in most large DM packages!

Visual “knowledge flow” to link components, or script languages (XML) to define complex experiments.

Page 17:

With all these tools, are we really so good?

Surprise!

Almost nothing can be learned using such tools!

Page 18:

May the force be with you

Hundreds of components ... billions of combinations ... Our treasure box is full! We can publish forever! Specialized transformations are still missing in many packages. Data miners have a hard job ... what to select?

What would we really like to have? Press the button, and wait for the truth!

Computer power is with us; meta-learning should replace data miners in finding all interesting data models = sequences of transformations/procedures.

Many considerations: optimal-cost solutions, various costs of using feature subsets; simple & easy to understand vs. optimal accuracy; various representations of knowledge: crisp, fuzzy or prototype rules, visualization, confidence in predictions ...

Page 19:

Meta-learning

Meta-learning means different things for different people.
Some call "meta" the learning of many models, ranking them, boosting, bagging, or creating an ensemble in many ways; here meta means optimization of parameters to integrate models.
Landmarking: characterize many datasets and remember which method worked best on each. Compare a new dataset to the reference ones; define various measures (not easy) and use similarity-based methods.
Regression models: created for each algorithm on parameters that describe data, to predict expected accuracy and rank potentially useful algorithms.
Stacking, ensembles: learn new models on the errors of the previous ones.
Deep learning: DARPA 2009 call – current methods are "flat", shallow; build a universal machine learning engine that generates progressively more sophisticated representations of patterns, invariants, correlations from data. Success in limited domains only ...
Meta-learning: learning how to learn.

Page 20:

Principles: information compression

Neural information processing in perception and cognition: information compression, or algorithmic complexity. In computing: minimum length (message, description) encoding.

Wolff (2006): all cognition and computation is information compression! Analysis and production of natural language, fuzzy pattern recognition, probabilistic reasoning and unsupervised inductive learning.
He talks about multiple alignment, unification and search, but so far only models for sequential data and 1D alignment have been demonstrated.
Information compression – encoding new information in terms of old – has been used to define a measure of syntactic and semantic information (Duch, Jankowski 1994), based on the size of the minimal graph representing a given data structure or knowledge-base specification; thus it goes beyond alignment. Real information = what the model cannot predict.
"Surprise" and curiosity measures: Pfaffelhuber (1972), Palm, Schmidhuber, Baldi ... all based on the same idea.

Page 21:

Similarity-based framework

Search for good models requires a framework to build and evaluate them: p(Ci|X;M) posterior classification probabilities or y(X;M) approximators, with models M parameterized in an increasingly sophisticated way. Similarity-Based Learning (SBL), or S-B Methods, provide such a framework.

(Dis)similarity:
• more general than feature-based description,
• no need for vector spaces (structured objects),
• more general than fuzzy approach (F-rules are reduced to P-rules),
• includes nearest-neighbor algorithms, MLPs, RBFs, separable function networks, SVMs, kernel methods, specialized kernels, and many others!

A systematic search (greedy, beam) or evolutionary search in the space of all SBL models is used to select the optimal combination of parameters & procedures, opening different types of optimization channels and trying to discover an appropriate bias for a given problem.

Result: several candidate models are created; already the first, very limited version gave the best results in 7 out of 12 Statlog problems.

Page 22:

SBM framework components

• Pre-processing: objects O => features X, or (dis)similarities D(O,O').
• Calculation of similarity between features d(xi,yi) and objects D(X,Y).
• Reference (or prototype) vector R selection/creation/optimization.
• Weighted influence of reference vectors G(D(Ri,X)), i=1..k.
• Functions/procedures to estimate p(C|X;M) or approximator y(X;M).
• Cost functions E[DT;M], various model selection/validation procedures.
• Optimization procedures for the whole model Ma.
• Search control procedures to create more complex models Ma+1.
• Creation of ensembles of (global, local, competent) models.

M = {X(O), d(.,.), D(.,.), k, G(D), {R}, {pi(R)}, E[.], K(.), S(.,.)}, where:
• S(Ci,Cj) is a matrix evaluating similarity of the classes;
• pi(X) is a vector of observed probabilities, used instead of hard labels.

The kNN model p(Ci|X;kNN) = p(Ci|X;k,D(.),{DT}); the RBF model: p(Ci|X;RBF) = p(Ci|X;D(.),G(D),{R}), MLP, SVM and many other models may all be “re-discovered” as a part of SBL.
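As an illustration only (not the GhostMiner/Intemi code), the sketch below shows how a few of the SBM components above – a distance function D, the neighborhood size k and the weighting function G(D) – parameterize a family of models that contains plain kNN as one point. All names in this sketch are invented for the example.

```python
import numpy as np

# Hypothetical sketch of part of the SBM parameterization M = {d(.,.), D(.,.), k, G(D), ...}.
# Component names are invented for illustration; this is not the authors' API.

def euclidean(x, y):            # D(X, Y) built from feature-level differences
    return np.sqrt(np.sum((x - y) ** 2))

def canberra(x, y):
    return np.sum(np.abs(x - y) / (np.abs(x) + np.abs(y) + 1e-12))

def hard_weight(d):             # G(D): all k neighbours vote equally
    return 1.0

def gaussian_weight(d, s=1.0):  # G(D): closer reference vectors count more
    return np.exp(-(d / s) ** 2)

def sbm_predict(X_train, y_train, x, D=euclidean, k=3, G=hard_weight):
    """Estimate p(C|x; M) from the k nearest reference vectors and return the best class."""
    dists = np.array([D(r, x) for r in X_train])
    nn = np.argsort(dists)[:k]
    classes = np.unique(y_train)
    votes = np.array([sum(G(dists[i]) for i in nn if y_train[i] == c) for c in classes])
    return classes[np.argmax(votes)]

# Plain kNN is the configuration (D=euclidean, G=hard_weight, k);
# swapping D or G moves to a different point of the SBM model space.
```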

Page 23:

Meta-learning in SBL scheme

Start from kNN, k=1, all data & features, Euclidean distance, end with a new model based on novel combination of procedures and parameterizations.

k-NN: 67.5/76.6%
+ d(x,y), Canberra: 89.9/90.7%
+ si=(0,0,1,0,1,1): 71.6/64.4%
+ selection (or ranking): 67.5/76.6%
+ k opt.: 67.5/76.6%
+ d(x,y) + si=(1,0,1,0.6,0.9,1), Canberra: 74.6/72.9%
+ d(x,y) + selection (or opt. k), Canberra: 89.9/90.7%

Page 24:

Meta-learning in SBM scheme

Start from kNN, k=1, all data & features, Euclidean distance, end with a new model biased for your data; greedy search is not optimal, use beam search.

k-NN 67.5/76.6%

+d(x,y); Canberra 89.9/90.7 %

+ si=(0,0,1,0,1,1); 71.6/64.4 %

+selection, 67.5/76.6 %

+k opt; 67.5/76.6 %

+d(x,y) + si=(1,0,1,0.6,0.9,1); Canberra 74.6/72.9 %

+d(x,y) + selection; Canberra 89.9/90.7 %
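A hedged sketch of the first level of such a search, using scikit-learn's kNN as the base model: each candidate configuration (distance function, k) is scored by cross-validation and only the best few are kept for further expansion, roughly as a beam search would do. The grid, scoring and beam width below are illustrative assumptions, not the actual Intemi procedure.

```python
import itertools
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def search_first_level(X, y, beam_width=2):
    # candidate "procedures": only the distance metric and k are varied in this sketch
    grid = [{"metric": m, "n_neighbors": k}
            for m, k in itertools.product(["euclidean", "manhattan", "canberra"], [1, 3, 5, 7])]
    scored = []
    for params in grid:
        model = KNeighborsClassifier(**params)
        score = cross_val_score(model, X, y, cv=5).mean()
        scored.append((score, params))
    scored.sort(key=lambda t: -t[0])
    # keep the best configurations; a full beam search would expand these further
    # (feature selection, feature weighting, prototype selection, ...)
    return scored[:beam_width]
```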

Page 25:

Heterogeneous systems

Next step: use components from different models. Problems requiring different scales (multiresolution).

2-class problems, two situations:

C1 inside the sphere, C2 outside. MLP: at least N+1 hyperplanes, O(N^2) parameters. RBF: 1 Gaussian, O(N) parameters.

C1 in the corner defined by the (1,1 ... 1) hyperplane, C2 outside. MLP: 1 hyperplane, O(N) parameters. RBF: many Gaussians, O(N^2) parameters, poor approximation.

Combination: needs both hyperplane and hypersphere!

Logical rule: IF x1>0 & x2>0 THEN C1 Else C2

is not represented properly by either MLP or RBF!

Different types of functions in one model, first step beyond inspirations from single neurons => heterogeneous models are inspired by neural minicolumns, more complex information processing.

Page 26:

Heterogeneous everything

Homogeneous systems: one type of "building blocks", the same type of decision borders, e.g. neural networks, SVMs, decision trees, kNNs.
Committees combine many models together, but lead to complex models that are difficult to understand.

Ockham's razor: simpler systems are better. Discovering the simplest class structures, the inductive bias of the data, requires Heterogeneous Adaptive Systems (HAS).

HAS examples:
• NN with different types of neuron transfer functions.
• k-NN with different distance functions for each prototype.
• Decision trees with different types of test criteria.

1. Start from a large network, use regularization to prune.
2. Construct a network adding nodes selected from a candidate pool.
3. Use very flexible functions, force them to specialize.

Page 27:

Taxonomy – TF

Page 28:

HAS decision trees

Decision trees select the best feature/threshold value; univariate and multivariate trees use tests of the form

    Xi < Tk    or    W.X = Σi Wi Xi < Tk

Decision borders: hyperplanes.

Introducing tests based on the L_a Minkovsky metric:

    T(X; R, T_R):   ||X − R||_a = ( Σi |Xi − Ri|^a )^(1/a) < T_R

Such DTs use radial kernel features!

For L2, spherical decision borders are produced.
For L∞, rectangular borders are produced.

For large databases, first clusterize the data to get candidate reference vectors R.
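A minimal sketch of such a distance-based test node, assuming the Minkowski form reconstructed above; the class name and API are invented for illustration.

```python
import numpy as np

# Hedged sketch of a single HAS decision-tree node that tests the Minkowski
# distance to a reference vector R instead of a single feature threshold.

class DistanceTestNode:
    def __init__(self, R, threshold, alpha=2.0):
        self.R = np.asarray(R, dtype=float)   # reference (prototype) vector
        self.threshold = threshold            # T_R, the distance threshold
        self.alpha = alpha                    # alpha=2 -> spherical border, large alpha -> box-like

    def test(self, X):
        d = np.sum(np.abs(X - self.R) ** self.alpha, axis=-1) ** (1.0 / self.alpha)
        return d < self.threshold             # True -> one branch, False -> the other

# Example mimicking the Wisconsin rule reported on the next slide:
# IF ||X - R303|| < 20.27 THEN malignant ELSE benign (R here is a placeholder prototype).
node = DistanceTestNode(R=np.zeros(9), threshold=20.27, alpha=2.0)
print(node.test(np.full(9, 5.0)))  # distance 15.0 < 20.27 -> True
```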

Page 29:

SSV HAS DT example

SSV HAS tree in GhostMiner 3.0, Wisconsin breast cancer data (UCI): 699 cases, 9 features (cell parameters, 1..10).
Classes: benign 458 (65.5%) & malignant 241 (34.5%).

Single rule gives simplest known description of this data:

IF ||X - R303|| < 20.27 THEN malignant ELSE benign

(the rule appearing most often in 10x CV). Accuracy = 97.4% – a good prototype for the malignant class!

Gives simple thresholds, that’s what MDs like the most!

Best 10CV around 97.5±1.8% (Naïve Bayes + kernel, or opt. SVM)

SSV without distances: 96.4±2.1%

C4.5 gives 94.7±2.0%

Several simple rules of similar accuracy but different specificity or sensitivity may be created using HAS DT. Need to select or weight features and select good prototypes.

Page 30:

Maximization of margin/regularization

Among all discriminating hyperplanes there is one defined by support vectors that is clearly better.

Page 31:

Kernels = similarity functions

Gaussian kernels in SVM: zi(X) = G(X; Xi, s) radial features, X => Z.
Gaussian mixtures are close to optimal Bayesian errors. The solution requires only continuous deformation of decision borders and is therefore rather easy.

Support Feature Machines (SFM): construct features based on projections, restricted linear combinations, kernel features, use feature selection.

Gaussian kernel, C=1. In the kernel space Z decision borders are flat, but in the X space highly non-linear!

SVM is based on a quadratic solver, without explicit features, but using the Z features explicitly has some advantages.
Multiresolution (Locally Optimized Kernels): different s for different support features, or using several kernels zi(X) = K(X; Xi, s).
Use linear solvers, neural networks, Naïve Bayes, or any other algorithm – all work fine.
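A rough sketch of using kernel features explicitly: build zi(X) = G(X; Xi, s) for a handful of reference vectors and hand the enhanced space to any linear solver (here scikit-learn's logistic regression stands in for the linear solvers mentioned above). This illustrates the idea, not the authors' SFM code; data and reference selection are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaussian_features(X, refs, s=1.0):
    # z_i(X) = exp(-||X - X_i||^2 / (2 s^2)) for each reference vector X_i
    d2 = ((X[:, None, :] - refs[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * s ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = ((X ** 2).sum(axis=1) > 1.0).astype(int)      # circular border: not linearly separable in X

refs = X[rng.choice(len(X), 20, replace=False)]   # a few reference vectors as "support features"
Z = gaussian_features(X, refs, s=1.0)

clf = LogisticRegression(max_iter=1000).fit(Z, y) # any linear solver or Naive Bayes could be used
print("training accuracy in kernel space:", clf.score(Z, y))
```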

Page 32:

Thyroid screening, network solution

Garavan Institute, Sydney, Australia

15 binary, 6 continuous

Training: 93+191+3488 Validate: 73+177+3178

• Determine important clinical factors.
• Calculate the probability of each diagnosis.

[Network diagram: clinical findings (age, sex, ..., TSH, T3, TT4, T4U, TBG) feed the hidden units, which output the final diagnoses: normal, hyperthyroid, hypothyroid.]

Poor results of SBL and SVM ... this data needs decision borders with sharp corners, due to the inherent logic based on thresholding by medical experts.

Page 33:

Hypothyroid data

2 years of real medical screening tests for thyroid diseases: 3772 cases with 93 primary hypothyroid and 191 compensated hypothyroid; the remaining 3488 cases are healthy. 3428 test cases with a similar class distribution. 21 attributes (15 binary, 6 continuous) are given, but only two of the binary attributes (on thyroxine, and thyroid surgery) contain useful information, therefore the number of attributes has been reduced to 8.

Method                         % train error   % test error
SFM, SSV + 2 B1 features            –               0.4
SFM, SVMlin + 2 B1 features         –               0.5
MLP + SVNT, 4 neurons              0.2              0.8
Cascade correlation                0.0              1.5
MLP + backprop                     0.4              1.5
SVM, Gaussian kernel               0.2              1.6
SVM, linear                        5.9              6.7

Page 34:

Hypothyroid data

Page 35:

How much can we learn?

Linearly separable or almost separable problems are relatively simple – deform or add dimensions to make the data separable.

How to define “slightly non-separable”? There is only separable and the vast realm of the rest.

Page 36:

Linear separability

QPC projection used to visualize Leukemia microarray data.

2-separable data, separated in vertical dimension.

Page 37:

Approximate separability

QPC visualization of Heart dataset: overlapping clusters, information in the data is insufficient for perfect classification, approximately 2-separable.

Page 38:

Easy problems

• Approximately linearly separable problems in the original feature space: linear discrimination is sufficient (always worth trying!).
• Simple topological deformation of decision borders is sufficient – linear separation is then possible in extended/transformed spaces. This is frequently sufficient for pattern recognition problems (more than half of the UCI problems).
• RBF/MLP networks with one hidden layer also solve such problems easily, but convergence/generalization for anything more complex than XOR is problematic.

SVM adds new features to “flatten” the decision border:

X = (x1, x2, ..., xn);   zi(X) = K(X, X(i)),

achieving larger margins/separability in the X+Z space.

Page 39:

Neurons learning complex logic

Boolean functions are difficult to learn: n bits but 2^n nodes => combinatorial complexity; similarity is not useful, since for parity all neighbors are from the wrong class. MLP networks have difficulty learning functions that are highly non-separable.

Projection on W=(111 ... 111) gives clusters with 0, 1, 2 ... n bits;

easy categorization in (n+1)-separable sense.

Ex. of 2-4D parity problems.

Neural logic can solve it without counting; find a good point of view.
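A small sketch verifying the observation above: projecting all n-bit vectors on W = (1,...,1) yields n+1 pure clusters indexed by the number of set bits, so parity becomes easy to categorize in the (n+1)-separable sense.

```python
import numpy as np
from itertools import product

n = 4
X = np.array(list(product([0, 1], repeat=n)))
parity = X.sum(axis=1) % 2           # class label
projection = X @ np.ones(n)          # y = W.X with W = (1,...,1) = number of set bits

for value in range(n + 1):
    labels = set(parity[projection == value])
    print(f"y = {value}: classes {labels}")   # each projected cluster is pure (single class)
```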

Page 40:

Easy and difficult problems

Linear separation: good goal if simple topological deformation of decision borders is sufficient.

Linear separation of such data is possible in higher dimensional spaces; this is frequently the case in pattern recognition problems.

RBF/MLP networks with one hidden layer solve such problems.

Difficult problems: disjoint clusters, complex logic.

Continuous deformation is not sufficient; networks with localized functions need exponentially large number of nodes.

Boolean functions: for n bits there are K=2^n binary vectors that can be represented as vertices of an n-dimensional hypercube.

Each Boolean function is identified by K bits.

BoolF(Bi) = 0 or 1 for i=1..K leads to 2^K Boolean functions.
Ex: n=2, vectors {00, 01, 10, 11}, Boolean functions {0000, 0001 ... 1111}, e.g. 0001 = AND, 0110 = XOR;
each function is identified by a number from 0 to 2^K − 1 = 15.
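An illustrative sketch (a brute-force grid search over weights, assumed here only for demonstration) that indexes Boolean functions of two bits by their output strings and checks linear separability, confirming that AND is separable while XOR is not.

```python
import numpy as np
from itertools import product

def is_linearly_separable(outputs, n=2, grid=np.linspace(-2, 2, 9)):
    """Brute-force check: does any (w, b) on a small grid realize the function with a threshold unit?"""
    X = np.array(list(product([0, 1], repeat=n)))
    y = np.array(outputs)
    for w in product(grid, repeat=n):
        for b in grid:
            pred = (X @ np.array(w) + b > 0).astype(int)
            if np.array_equal(pred, y):
                return True
    return False

AND = [0, 0, 0, 1]   # function index 1 (output bits for inputs 00, 01, 10, 11)
XOR = [0, 1, 1, 0]   # function index 6
print("AND separable:", is_linearly_separable(AND))   # True
print("XOR separable:", is_linearly_separable(XOR))   # False
```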

Page 41:

Boolean functions

n=2: 16 functions, 12 separable, 4 not separable.

n=3, 256 f, 104 separable (41%), 152 not separable.

n=4, 64K=65536, only 1880 separable (3%)

n=5, 4G, but << 1% separable ... bad news!

Existing methods may learn some non-separable functions, but in practice most functions cannot be learned!

Example: n-bit parity problem; many papers in top journals.

No off-the-shelf systems are able to solve such problems.

For all parity problems SVM is below base rate!

Such problems are solved only by special neural architectures or special classifiers – if the type of function is known.

But parity is still trivial ... solved by

    y = cos( ω Σi=1..n bi )
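A quick check of the periodic single-neuron solution reconstructed above, taking ω = π so that y = cos(π Σ bi) is +1 for even and −1 for odd bit counts.

```python
import numpy as np
from itertools import product

n = 8
X = np.array(list(product([0, 1], repeat=n)))
parity = X.sum(axis=1) % 2
y = np.cos(np.pi * X.sum(axis=1))          # +1 for even parity, -1 for odd parity
pred = (y < 0).astype(int)
print("all 2^8 parity cases correct:", np.array_equal(pred, parity))   # True
```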

Page 42:

Goal of learning

If simple topological deformation of decision borders is sufficient, linear separation is possible in higher-dimensional spaces, "flattening" non-linear decision borders; kernel approaches are then sufficient. RBF/MLP networks with one hidden layer solve the problem. This is frequently the case in pattern recognition problems.

For complex logic this is not sufficient; networks with localized functions need exponentially large number of nodes.

Such situations arise in AI reasoning problems, real perception, object recognition, text analysis, bioinformatics ...

Linear separation is too difficult, set an easier goal. Linear separation: projection on 2 half-lines in the kernel space:

line y=WX, with y<0 for class – and y>0 for class +.

Simplest extension: separation into k-intervals, or k-separability.

For parity: find direction W with minimum # of intervals, y=W.X
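A minimal sketch of the k-separability criterion just stated: project the data on a direction W and count the class-pure intervals along the projected line. The helper below is an invented illustration; ties between identical projected values of different classes are not handled specially.

```python
import numpy as np
from itertools import product

def count_intervals(X, y, W):
    """Number of class-pure intervals needed along the projection y = W.X."""
    z = X @ W
    order = np.argsort(z)
    labels = y[order]
    # number of class changes along the line + 1 = number of intervals
    return int(np.sum(labels[1:] != labels[:-1]) + 1)

# 3-bit parity projected on the diagonal W = (1,1,1) needs 4 intervals (4-separable)
X = np.array(list(product([0, 1], repeat=3)), dtype=float)
y = X.sum(axis=1).astype(int) % 2
print(count_intervals(X, y, np.ones(3)))   # 4
```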

Page 43:

NN as data transformations

Vector mappings from the input space to hidden space(s) and to the output space + adapt parameters to improve cost functions.

Hidden-Output mapping done by MLPs:

T = {Xi}, training data, N-dimensional.

H = {hj(T)}, the image of X in the hidden space, j=1..NH.

... more transformations in hidden layers ...

Y = {yk(H)}, the image of X in the output space, k=1..NC.

ANN goal:

data image H in the last hidden space should be linearly separable; internal representations will determine network generalization.

But we never look at these representations!

Page 44:

T-based meta-learning

To create successful meta-learning through search in the model space, fine granulation of methods is needed: extracting information using support features, learning from others, knowledge transfer and deep learning.

Learn to compose, using complexity guided search, various transformations (neural or processing layers), for example:

• Creation of new support features: linear, radial, cylindrical, restricted localized projections, binarized ... feature selection or weighting.
• Specialized transformations in a given field: text, bio, signal analysis, ...
• Matching pursuit networks for signal decomposition, QPC index, PCA or ICA components, LDA, FDA, maximization of mutual information, etc.
• Transfer learning, granular computing, learning from successes: discovering interesting higher-order patterns created by initial models of the data.
• Stacked models: learning from the failures of other methods.
• Schemes constraining the search, learning from the history of previous runs at the meta-level.

Page 45:

Network solution

Can one learn the simplest model for an arbitrary Boolean function?

2-separable (linearly separable) problems are easy; non-separable problems may be broken into k-separable ones, k>2.

Blue: sigmoidal neurons with threshold, brown: linear neurons.

[Network diagram: inputs X1 ... X4 feed a single linear node computing y = W.X; sigmoidal nodes s(by+q1), s(by+q2), ..., s(by+q4) follow, combined with alternating +1/−1 output weights.]

Neural architecture for k=4 intervals, or 4-separable problems.
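A hedged sketch of such an interval network: one linear node y = W.X followed by steep sigmoids at the k−1 interval boundaries, combined with alternating +1/−1 weights, so the output is high exactly when y falls inside a "class +" interval. The parameter values below (β = 20, boundaries at 0.5, 1.5, 2.5) are illustrative only.

```python
import numpy as np
from itertools import product

def sigmoid(t, beta=20.0):
    return 1.0 / (1.0 + np.exp(-beta * t))

def interval_network(X, W, thresholds):
    """thresholds = sorted interval boundaries q1 < q2 < ...; signs alternate +1, -1, +1, ..."""
    y = X @ W
    signs = np.array([(-1.0) ** i for i in range(len(thresholds))])
    return sum(s * sigmoid(y - q) for s, q in zip(signs, thresholds))

# 3-bit parity projected on W=(1,1,1): odd bit counts {1,3} give output ~1, even counts ~0.
X = np.array(list(product([0, 1], repeat=3)), dtype=float)
out = interval_network(X, np.ones(3), thresholds=[0.5, 1.5, 2.5])
print(np.round(out, 2))
```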

Page 46:

8-bit parity solution

QPC solution to 8-bit parity data: projection on the W=[1,1,...,1] diagonal. k-separability is much easier to achieve than full linear separability.

Page 47:

Example: aRPM

aRPM, Almost Random Projection Machine (with Hebbian learning):
• generate random combinations of inputs (line projections) z(X) = W.X;
• find and isolate pure clusters h(X) = G(z(X));
• estimate the relevance of h(X), e.g. MI(h(X),C), and leave only the good nodes;
• continue until each vector activates at least k nodes.
Count how many nodes vote for each class and plot: no LDA needed! No need for learning at all!
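A rough sketch of the aRPM procedure described above (random projections, pure-cluster detection, voting); the bin counts, purity thresholds and the purity criterion standing in for mutual-information-based relevance are invented simplifications, not the authors' settings.

```python
import numpy as np

def find_pure_interval(z, y, n_bins=10, purity=0.9, min_count=5):
    """Return (lo, hi, class) for a nearly pure interval of projected values, or None."""
    edges = np.quantile(z, np.linspace(0, 1, n_bins + 1))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (z >= lo) & (z <= hi)
        if mask.sum() >= min_count:
            counts = np.bincount(y[mask])          # y: integer class labels 0..n_classes-1
            if counts.max() / mask.sum() >= purity:
                return (lo, hi, counts.argmax())
    return None

def arpm_fit(X, y, n_projections=200, seed=0):
    rng = np.random.default_rng(seed)
    nodes = []
    for _ in range(n_projections):
        W = rng.normal(size=X.shape[1])            # random combination of inputs, z(X) = W.X
        node = find_pure_interval(X @ W, y)
        if node is not None:
            nodes.append((W, node))                # keep only "good" nodes
    return nodes

def arpm_predict(nodes, X, n_classes):
    votes = np.zeros((len(X), n_classes))
    for W, (lo, hi, cls) in nodes:
        z = X @ W
        votes[(z >= lo) & (z <= hi), cls] += 1     # count how many nodes vote for each class
    return votes.argmax(axis=1)
```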

Page 48:

QPC Projection Pursuit

What is needed to learn data with complex logic?
• cluster non-local areas in the X space, use W.X;
• capture local clusters after transformation, use G(W.X - q).
SVMs fail because the number of directions W that should be considered grows exponentially with the size of the problem n. What will solve it? Projected clusters!

1. A class of constructive neural network solutions with G(W.X-q) functions combining non-local/local projections, with special training algorithms.

2. Maximize the leave-one-out error after projection: take some localized function G, count in a soft way cases from the same class as Xk.

Grouping and separation; projection may be done directly to 1 or 2D for visualization, or higher D for dimensionality reduction, if W has d columns.

Q(W) = Σ_X [ A+ Σ_{Xk ∈ C_X} G( W·(X − Xk) ) − A− Σ_{Xk ∉ C_X} G( W·(X − Xk) ) ]

where C_X is the class of X and G is a localized (window-like) function.
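A direct sketch of the QPC index as reconstructed above, with a Gaussian window standing in for the localized function G; gradient-based optimization of W over this index (omitted here) gives the projection pursuit.

```python
import numpy as np

def qpc_index(W, X, y, A_plus=1.0, A_minus=1.0, sigma=1.0):
    """Quality of Projected Clusters for projection direction W (higher is better)."""
    z = X @ W                                  # 1D projection
    diff = z[:, None] - z[None, :]             # pairwise projected differences
    G = np.exp(-(diff / sigma) ** 2)           # localized "window" function G
    same = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)                # do not count X with itself
    other = 1.0 - same
    np.fill_diagonal(other, 0.0)
    # reward close same-class pairs, penalize close different-class pairs
    return np.sum(A_plus * same * G - A_minus * other * G)
```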

Page 49:

Parity n=9

Simple gradient learning; the QPC quality index is shown below.

Page 50:

Learning hard functions

Training almost perfect for parity, with linear growth in the number of vectors for k-sep. solution created by the constructive neural algorithm.

Page 51:

Real data

On simple data the results are similar to those from SVM (because they are almost optimal), but c3sep models are much simpler, although only 3-separability is assumed.

Page 52:

Only 13 out of 45 UCI problems are non-trivial, less than 30%!

For these problems G-SVM is significantly better than O(nd) methods.

Page 53:

Trivial data examples

Method    dermatology   TA-evaluation   lymph
MC         31.0±1.0      34.5±2.7       54.7±4.5
1NP        96.9±3.2      50.9±12.7      86.4±8.6
LVQ        91.3±3.8      33.1±3.6       82.5±9.4
MLC        88.4±4.6      48.6±12.8      78.8±9.4
NB         90.1±4.5      50.7±12.5      81.2±8.9
K2M        94.9±3.8      52.1±12.0      82.7±9.2
LSVM       94.0±3.5      13.3±9.9       81.3±9.8
GSVM       94.5±3.9      42.4±9.4       83.6±9.8

Page 54:

Non-trivial data examples

Method    car-evaluation   chess        silhouettes
MC         70.0±0.2        52.2±0.1     25.1±0.5
1NP        73.2±2.9        86.3±1.3     45.3±4.6
LVQ        73.6±3.6        61.4±15.2    25.8±0.9
MLC        84.1±2.5        83.9±1.7     53.0±4.2
NB         87.1±1.9        88.1±1.3     45.7±4.0
K2M        91.1±2.5        90.9±3.3     72.9±4.7
LSVM       69.6±2.0        96.2±1.4     69.9±2.7
GSVM       98.8±0.8        99.3±0.4     79.8±2.7

Page 55:

Non-trivial data examples: signals

Method    sonar        ionosphere   vowel
MC         53.4±2.2     64.1±1.4      7.6±0.1
1NP        69.7±7.5     81.1±6.4     52.0±6.6
LVQ        71.7±7.4     83.7±5.3      9.1±1.4
MLC        70.6±6.0     59.2±6.2     52.0±6.0
NB         69.0±8.7     84.2±6.2     67.5±6.3
K2M        76.7±8.1     86.5±5.5     81.0±5.0
LSVM       75.5±8.3     87.7±4.6     25.8±5.0
GSVM       85.5±5.3     94.6±3.7     96.8±2.2

Page 56:

Non-trivial data examples: medical

Method    thyroid cardio   parkinson    tocograph2
MC         92.6±0.1        75.4±3.2     77.9±0.3
1NP        71.1±1.8        73.6±8.7     76.6±1.8
LVQ        92.6±0.1        77.8±6.9     77.9±0.3
MLC        86.6±1.4        78.2±8.5     73.7±2.2
NB         95.5±0.4        69.8±9.1     82.5±1.9
K2M        94.7±2.2        85.6±7.6     87.3±2.7
LSVM       93.8±0.5        86.3±10.2    87.5±1.5
GSVM       97.5±0.7        93.3±5.6     92.1±2.0

Page 57:

Rules

QPC visualization of the Monks artificial symbolic dataset => two logical rules are needed.

Page 58:

Complex distribution

QPC visualization of concentric rings in 2D with strong noise in the remaining 2 dimensions; transform: nearest-neighbor solutions, combinations of ellipsoidal densities.

Page 59:

Knowledge transfer

Brains learn new concepts in terms of old ones: use a large semantic network and add new concepts by linking them to the known ones.

Knowledge should be transferred between tasks, not just learned from a single dataset. aRPM does that.

Need to discover good building blocks for higher level concepts/features.

Page 60:

Learning from others ...

Learn to transfer knowledge by extracting interesting features created by different systems, e.g. prototypes, combinations of features with thresholds ... => Universal Learning Machines.

Classify all types of features – what type of information do they extract?

B1: Binary – unrestricted projections b1.
B2: Binary – complexes b1 ᴧ b2 ... ᴧ bk.
B3: Binary – restricted by distance.
R1: Line – original real features ri; non-linear thresholds for "contrast enhancement" s(ri - bi); intervals (k-sep).
R4: Line – restricted by distance, original feature; thresholds; intervals (k-sep); more general 1D patterns.
P1: Prototypes – general q-separability, weighted distance functions or specialized kernels.
M1: Motifs – based on correlations between elements rather than input values.


Page 61:

B1/B2 Features

Dataset B1/B2 Features

Australian F8 < 0.5 F8 ≥ 0.5 ᴧ F9 ≥ 0.5

Appendicitis F7 ≥ 7520.5 F7 < 7520.5 ᴧ F4 < 12

Heart F13 < 4.5 ᴧ F12 < 0.5 F13 ≥ 4.5 ᴧ F3 ≥ 3.5

Diabetes F2 < 123.5 F2 ≥ 143.5

Wisconsin F2 < 2.5 F2 ≥ 4.5

Hypothyroid F17 < 0.00605 F17 ≥ 0.00605 ᴧ F21 < 0.06472

Examples of B1 features taken from important segments of decision trees. These features, used in various learning systems, greatly simplify their models and increase their accuracy. Convert decision trees to distance functions for more! With these features almost all learning systems reach similarly high accuracy!
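A hedged sketch of how such B1 features could be harvested from decision-tree splits using scikit-learn; the helper name and the depth/feature limits are assumptions made only for this illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def b1_features(X, y, max_features=2):
    """Turn the top decision-tree thresholds F_i < t into new binary 0/1 features."""
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    feats, thresholds = tree.tree_.feature, tree.tree_.threshold
    # internal nodes have feature index >= 0; assumes the tree found at least one split
    tests = [(f, t) for f, t in zip(feats, thresholds) if f >= 0][:max_features]
    B1 = np.column_stack([(X[:, f] < t).astype(int) for f, t in tests])
    return B1, tests          # binary features and the (feature index, threshold) pairs

# The new columns can be appended to X and fed to SVM, Naive Bayes, etc.
```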

Page 62:

Dataset / Classifier         SVM (#SV)            SSV (#leaves)         NB
Australian                   84.9±5.6 (203)       84.9±3.9 (4)          80.3±3.8
  ULM                        86.8±5.3 (166)       87.1±2.5 (4)          85.5±3.4
  Features                   B1(2)+P1(3)          B1(2)+R1(1)+P1(3)     B1(2)
Appendicitis                 87.8±8.7 (31)        88.0±7.4 (4)          86.7±6.6
  ULM                        91.4±8.2 (18)        91.7±6.7 (3)          91.4±8.2
  Features                   B1(2)                B1(2)                 B1(2)
Heart                        82.1±6.7 (101)       76.8±9.6 (6)          84.2±6.1
  ULM                        83.4±3.5 (98)        79.2±6.3 (6)          84.5±6.8
  Features                   Data+R1(3)           Data+R1(3)            Data+B1(2)
Diabetes                     77.0±4.9 (361)       73.6±3.4 (4)          75.3±4.7
  ULM                        78.5±3.6 (338)       75.0±3.3 (3)          76.5±2.9
  Features                   Data+R1(3)+P1(4)     B1(2)                 Data+B1(2)
Wisconsin                    96.6±1.6 (46)        95.2±1.5 (8)          96.0±1.5
  ULM                        97.2±1.8 (45)        97.4±1.6 (2)          97.2±2.0
  Features                   Data+R1(1)+P1(4)     R1(1)                 R1(1)
Hypothyroid                  94.1±0.6 (918)       99.7±0.5 (12)         41.3±8.3
  ULM                        99.5±0.4 (80)        99.6±0.4 (8)          98.1±0.7
  Features                   Data+B1(2)           Data+B1(2)            Data+B1(2)

Page 63:

Support Feature Machines

General principle: complementarity of information processed by parallel interacting streams with hierarchical organization (Grossberg, 2000). Cortical minicolumns provide various features for higher processes. Create information that is easily used by various ML algorithms: explicitly build an enhanced space by adding more transformations.

• X, original features.
• Z = WX, random linear projections and other projections (PCA, ICA, PP).
• Q = optimized Z, using Quality of Projected Clusters or other PP techniques.
• H = [Z1, Z2], intervals containing pure clusters on projections.
• K = K(X,Xi), kernel features.
• HK = [K1, K2], intervals on kernel features.

Kernel-based SVM is equivalent to linear SVM in the explicitly constructed kernel space; enhancing this space leads to improved results. LDA is one option, but many other algorithms benefit from information in enhanced feature spaces; the best results come from various combinations of X+Z+Q+H+K+HK.
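A minimal sketch of building such an enhanced space: original features X, random linear projections Z and Gaussian kernel features K are concatenated and passed to a linear model. The generator choices and sizes are illustrative assumptions, and the Q, H and HK components are omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def enhance(X, X_ref, n_proj=10, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Z = X @ rng.normal(size=(X.shape[1], n_proj))                     # Z: random linear projections
    d2 = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))                                # K: kernel features K(X, Xi)
    return np.hstack([X, Z, K])                                       # enhanced space X+Z+K

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)
X_ref = X[rng.choice(len(X), 15, replace=False)]
clf = LogisticRegression(max_iter=2000).fit(enhance(X, X_ref), y)
print("training accuracy in X+Z+K space:", round(clf.score(enhance(X, X_ref), y), 3))
```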

Page 64:

Universal Learning Machines

Page 65:

Real meta-learning!

Meta-learning: learning how to learn; replace the experts who search for the best models by making a lot of experiments.

The search space of models is too large to explore exhaustively; design the system architecture to support knowledge-based search.

• Abstract view, uniform I/O, uniform results management.

• Directed acyclic graphs (DAGs) of boxes representing scheme placeholders and particular models, interconnected through I/O.

• Configuration level for meta-schemes, expanded at runtime level.

An exercise in software engineering for data mining!

Page 66:

Intemi, Intelligent Miner

Meta-schemes: templates with placeholders.

• May be nested; the role decided by the input/output types.

• Machine learning generators based on meta-schemes.

• The granulation level makes it possible to create novel methods.

• Complexity control: Length of the program/errors + log(time)

• A unified meta-parameters description, defining the range of sensible values and the type of the parameter changes.

Page 67:

Advanced meta-learning

• Extracting meta-rules describing search directions.
• Finding the correlations occurring among different items in the most accurate results, identifying different machine (algorithmic) structures with similar behavior in an area of the model space.

• Depositing the knowledge they gain in a reusable meta-knowledge repository (for meta-learning experience exchange between different meta-learners).

• A uniform representation of the meta-knowledge, extending expert knowledge, adjusting the prior knowledge according to performed tests.

• Finding new successful complex structures and converting them into meta-schemes (which we call meta abstraction) by replacing proper substructures by placeholders.

• Beyond transformations & feature spaces: actively search for info.

Intemi software (N. Jankowski and K. Grąbczewski) incorporating these ideas and more is coming "soon" ...

Page 68:

Meta-learning architecture

Inside the meta-parameter search, a repeater machine composed of distribution and test schemes is placed.

Page 69:

Generating machines

The search process is controlled by a variant of approximated Levin complexity: an estimate of program complexity combined with running time. Simpler machines are evaluated first; machines that run too long (the approximations may be wrong) are put into quarantine.
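A rough sketch of complexity-ordered candidate evaluation with a quarantine, in the spirit described above; the cost estimates, the time budget and the in-process timing are invented placeholders (a real system would run candidates in separate worker processes and enforce the budget there).

```python
import heapq
import time

def run_with_budget(train_fn, budget_s):
    """Run a candidate and report whether it exceeded its time budget (measured, not enforced)."""
    start = time.time()
    result = train_fn()
    elapsed = time.time() - start
    return result, elapsed, elapsed > budget_s

def meta_search(candidates, budget_s=5.0):
    # candidates: list of (estimated_complexity, name, train_fn); names assumed unique
    heap = list(candidates)
    heapq.heapify(heap)                      # simplest machines are evaluated first
    results, quarantine = [], []
    while heap:
        est, name, train_fn = heapq.heappop(heap)
        score, elapsed, too_slow = run_with_budget(train_fn, budget_s)
        if too_slow:
            quarantine.append(name)          # complexity estimate was wrong; set aside
        else:
            results.append((score, name, est + elapsed))
    return sorted(results, reverse=True), quarantine
```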

Page 70:

Pre-compute what you can

and use “machine unification” to get substantial savings!

Page 71:

Complexities on vowel data

Page 72:

Simple machines on vowel data

Number on far left = final ranking.

Gray bar = accuracy

Small bars (up-down) show estimates of total complexity, time, and memory.

Numbers in the middle= process id (refer to models in the previous table).

Page 73:

Complex machines on vowel data

Number on far left = final ranking.

Gray bar = accuracy

Small bars (up-down) show estimates of total complexity, time, and memory.

Numbers in the middle = process id (refer to models in the previous table).

Page 74:

Summary

1. Challenging data cannot be handled with existing DM tools.

2. Similarity-based framework enables meta-learning as search in the model space, heterogeneous systems add fine granularity.

3. No off-the-shelf classifiers are able to learn difficult Boolean functions.

4. Visualization of hidden neuron’s shows that frequently perfect but non-separable solutions are found despite base-rate outputs.

5. Linear separability is not the best goal of learning, other targets that allow for easy handling of final non-linearities may work better.

6. k-separability defines complexity classes for non-separable data.

7. Transformation-based learning shows the need for component-based approach to DM, discovery of simplest models and support features. Meta-learning replaces data miners automatically creating new optimal learning methods on demand.

Is this the final word in data mining? Only the future will tell.

Page 75:

Exciting times are coming!

Thank you for lending your ears!

Google: W. Duch => Papers & presentations.
Meta-papers: http://www.is.umk.pl/projects/meta.html
New book: K. Grąbczewski, Meta-learning in Decision Tree Induction (Springer, 2013).