Demystifying Machine Learning Using LIME
Post on 21-Apr-2017
BLACK BOX MODELS
Machine learning models are often dismissed on the grounds of lack of interpretability. When using advanced models it is nearly impossible to understand how a model is making a prediction.
MNIST - ACCURACY VS # PARAMETERS
Notebook to create the plot
LIME
LIME stands for Local Interpretable Model-agnostic Explanations, and its objective is to explain the result from any classifier so that a human can understand individual predictions.
LIME
An interpretable representation is a point in a space whose dimensions can be interpreted by a human. LIME frames the search for an interpretable explanation as an optimization problem. Given a set G of potentially interpretable models, we need a measure L(f, g, x) of how poorly the interpretable model g ∈ G approximates the original model f for point x; this is the loss function. We also need some measure Ω(g) of the complexity of the model (e.g. the depth of a decision tree). We then pick a model which minimizes both of these:
ξ(x) = argmin_{g ∈ G} L(f, g, x) + Ω(g)
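This optimization can be sketched from scratch under simplifying assumptions: take g to be a linear model fit to random perturbations of x, weight each perturbation by an exponential proximity kernel, and approximate Ω(g) with a ridge penalty. The black-box function f, the feature values, and all parameters below are illustrative choices, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(X):
    # Hypothetical black-box model: probability depends only on feature 0.
    return 1 / (1 + np.exp(-3 * X[:, 0]))

def explain(x, f, num_samples=500, kernel_width=1.0, ridge=0.01):
    """Fit a local weighted linear surrogate g around x.

    The proximity kernel plays the role of the locality in L(f, g, x);
    the ridge penalty stands in for the complexity term Omega(g).
    """
    d = x.shape[0]
    # Sample perturbations around the point being explained
    Z = x + rng.normal(scale=0.5, size=(num_samples, d))
    y = f(Z)
    # Proximity weights: closer perturbations count more
    dist2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-dist2 / kernel_width**2)
    # Weighted ridge regression in closed form, on data centered at x
    Zc = Z - x
    A = Zc.T @ (w[:, None] * Zc) + ridge * np.eye(d)
    b = Zc.T @ (w * (y - y.mean()))
    return np.linalg.solve(A, b)

coef = explain(np.array([0.2, -0.1, 0.5]), f)
# coef[i] is the local importance of feature i; feature 0 should dominate
```

The `lime` package does essentially this with a sparse linear model (a sparsity penalty as Ω) rather than the ridge penalty used in this sketch.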
LIME EXAMPLE: Phishing URL
Phishing probability: 1.0
Url = http://login.paypal.com.convexcentral.com/Update/ab770f624342b07b71e56c1bae5d9bcb/
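The suspicious structure here is that "login.paypal.com" is only a subdomain prefix: the registered domain is actually convexcentral.com. A minimal sketch of detecting this mismatch (the two-label suffix heuristic below is a simplification; real code would consult a public-suffix list):

```python
from urllib.parse import urlparse

# Phishing URL from the slide above
url = ("http://login.paypal.com.convexcentral.com"
       "/Update/ab770f624342b07b71e56c1bae5d9bcb/")

host = urlparse(url).hostname
# Naive registered-domain heuristic: keep the last two labels
registered = ".".join(host.split(".")[-2:])
# host mentions "paypal.com", but registered == "convexcentral.com"
```

Features like "known brand appears in the hostname but not in the registered domain" are exactly the kind of interpretable dimensions LIME can surface for a URL classifier.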
LIME EXAMPLE: Legitimate URL
Phishing probability: 0.0283
Url = http://www.redeyechicago.com/entertainment/tv/redeye-banshee-ivana-mili...
THANK YOU
FULL NOTEBOOK IN https://github.com/albahnsen/talk_demystifying_machine_learning