serverless functions and machine learning: putting the ai in apis · 2019-10-28
TRANSCRIPT
Serverless Functions and Machine Learning: putting the AI in APIs
Jon Peck
Making state-of-the-art algorithms
discoverable and accessible to everyone
Fullstack Developer & Advocate
@peckjon
bit.ly/ai-in-api
Agenda
● What's Machine Learning?
● How do we use it? Why APIs?
● Off-the-shelf ML
● Hosting your own models
● Moving to production
● Combining models
● Tutorials & demo solutions
What is Machine Learning?
● For certain tasks, figuring out the exact, handwritten code could take years, or isn’t yet possible
● Often, we hire humans to do it ○ Recognize images ○ Read handwriting ○ Label reading
● Machine Learning is like having an army of workers which each do one thing
● Train a machine to figure out the problem, then run it whenever you need on new input
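The "train once, then run on new input" idea can be sketched in a few lines. This is a deliberately tiny toy, not a real model: we learn a single decision threshold from labeled examples instead of hand-writing the rule.

```python
# Toy illustration of "train a machine, then run it on new input":
# learn one decision threshold from labeled 1-D examples.

def train(examples):
    """examples: list of (value, label) pairs with labels 0 or 1.
    Returns a threshold halfway between the two class means."""
    zeros = [v for v, y in examples if y == 0]
    ones = [v for v, y in examples if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def predict(threshold, value):
    """Run the 'model' on new, unseen input."""
    return 1 if value >= threshold else 0

# Train once on labeled data...
model = train([(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)])
# ...then call it whenever new input arrives.
print(predict(model, 7.5))  # → 1
print(predict(model, 2.5))  # → 0
```

Real models have millions of parameters instead of one threshold, but the workflow — fit on labeled data, then reuse the fitted model — is the same.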
ML already surrounds us: how will you use it?
E-Commerce: Object Detection · Customer Segmentation · Recommender Engines
Content: Keyword Extraction · Auto Summarize · Named Entity Recognition · Language Translation · Face Detect/Recognize
And many more: Nudity Detection · Profanity Detection · Fraud Detection · Sentiment Analysis · Stable Roommate
A few of my Favorite Algorithms
How to use ML, part 1: Off-the-shelf APIs
Use just like any other API:
1. Register with the provider
2. Learn their API structure, write integration code (varies by service)
3. Call ad-hoc, usually billed by number of calls (cost varies by service)
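Steps 2-3 usually boil down to an authenticated JSON POST. Here is a minimal sketch using only the standard library; the endpoint URL, auth scheme, and payload shape are placeholders that vary by provider, not a real service.

```python
# Hypothetical sketch of calling an off-the-shelf ML API.
# Endpoint, header, and payload are illustrative assumptions.
import json
import urllib.request

def build_request(endpoint, api_key, payload):
    """Construct (but don't send) an authenticated JSON POST request."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=data,
        headers={
            "Authorization": f"Bearer {api_key}",  # auth scheme varies by provider
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "https://api.example.com/v1/sentiment",  # placeholder endpoint
    "MY_API_KEY",
    {"text": "I love this product"},
)
# urllib.request.urlopen(req) would perform the call; since you're billed
# per call, cache results where you can.
```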
Algorithmia’s ML Marketplace:
● Approx. 9,500 ML & utility algorithms from 100K+ developers: documented, categorized, peer-rated
● Unified under a single API with cut-and-paste code in every language, and on-page live testing
● Per-compute-second billing (no overhead or GPU upcharges, no subscriptions)
How to use ML, part 2: Host your own models!

How to build a model (you’re gonna need a data scientist):
● Get lots of training data, label it
● Select an appropriate model, train it, adjust hyperparameters
● Repeat until the model validates on test data

How to run a model (aka inference/prediction):
● Copy your trained model to a dedicated server
● Write encapsulation code to load the model and provide an API endpoint
● Call the endpoint from your app, service, etc.
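The "load model, provide API endpoint" step can be sketched with nothing but the standard library. The stub model below stands in for whatever your framework produces (a pickle, a SavedModel, a TorchScript file); everything else is the encapsulation pattern itself.

```python
# Minimal self-hosted inference sketch: load the trained model ONCE at
# startup, then serve predictions from a /predict-style endpoint.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_model():
    """In real code: pickle.load, torch.load, tf.saved_model.load, etc.
    Here, a stub classifier standing in for a trained model."""
    return lambda features: {"label": "cat" if sum(features) > 1.0 else "dog"}

MODEL = load_model()  # load once at startup, not per request

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        result = json.dumps(MODEL(features)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    HTTPServer(("", 8080), PredictHandler).serve_forever()
```

In production you would reach for a real framework (Flask, FastAPI) plus a WSGI server, but the shape — load once, predict per request — is the same.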
Wait… why NOT just run ML on the webserver?
● Many languages: Python, but also C/C++, Java, R…
● Many frameworks: SKLearn, NLTK, TF, PyTorch, CNTK, Keras…
● Processor and memory limitations, competition for resources
● GPUs for Deep Learning?
● Security considerations
● Scaling and geo-distribution
1. Set up server
○ Select proper balance of CPU, GPU, memory, cost
○ Laborious to configure the first time, but fairly easy to replicate
○ Expensive for higher-powered machines (especially GPUs)
2. Create microservice
○ Write an API wrapper (e.g., Flask)
○ Will be usable from any language or environment
○ How to secure, meter, disseminate?
3. Add scaling
○ Cloud VMs can scale by adding more copies (usually billed per machine-hour)
○ Write/configure automation to predict load & create VMs
4. Repeat for each unique environment
○ Separate server for each model?
○ Or deal with dependency & resource conflicts?
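A back-of-the-envelope sketch of step 3 shows why per-machine-hour billing stings: you provision for peak load plus headroom and pay for the idle capacity. All numbers below are illustrative assumptions, not real benchmarks.

```python
# Toy capacity/cost estimator for the "add scaling" step.
# capacity_per_vm and vm_price_per_hour are made-up figures.
import math

def replicas_needed(requests_per_sec, capacity_per_vm, headroom=0.2):
    """VMs required to serve the load with a 20% safety margin."""
    return max(1, math.ceil(requests_per_sec * (1 + headroom) / capacity_per_vm))

def hourly_cost(requests_per_sec, capacity_per_vm, vm_price_per_hour):
    """Per-machine-hour billing: you pay for the headroom too."""
    return replicas_needed(requests_per_sec, capacity_per_vm) * vm_price_per_hour

# A GPU VM handling 50 predictions/sec at $3/hour, under 400 req/s load:
print(replicas_needed(400, 50))   # → 10
print(hourly_cost(400, 50, 3.0))  # → 30.0
```

Real autoscalers predict load from metrics rather than a fixed rate, but the cost structure — whole machines billed whether busy or idle — is what per-second serverless billing avoids.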
What about dedicated ML server(s)?
Initially, this works!
● A few models, 1-2 languages/frameworks
● Dedicated hardware or VM Hosting
● IT Team does DevOps
● High time-to-deploy, manual discoverability
● Few end-users, heterogeneous APIs
What about dedicated ML server(s)?
But pretty soon...
● Hundreds of models on many runtimes / frameworks
● Heterogeneous, largely unpredictable
● Each model: 1 to 1,000 calls/second, a lot of variance
● Need auto-deploy, autoscale, discoverability, low latency
● Common API, composability, fine-grained security
MACHINE LEARNING != PRODUCTION MACHINE LEARNING
Should we try serverless functions?

● Initially, this looks great
○ Simple setup: just fill out a function body
○ Automatic API wrappers or a configurable API gateway
○ No DevOps: maintenance handled by the provider
○ Instant, elastic scaling (big cost savings)
○ Cheap: usually billed per-second, and free when not in use
● But there are some significant limitations
○ Not optimized for ML
○ Languages: Node & some Python, Java, C#
○ Limited dependency support
○ No GPUs!
○ Max execution time: 5-15 minutes
○ Little/no consumer-facing UI
Instead, consider model-hosting solutions

Amazon SageMaker
1. Train on SageMaker (limited langs/frameworks)
2. Or configure and build a container locally
3. Create an ML Endpoint and deploy the container
4. Write surrounding logic as a separate Lambda
5. Scaling/cost varies by EC2 type & attached resources

Google MLE
1. Train on Google MLE
2. Deploy the model to an MLE Endpoint
3. Write surrounding logic as a separate GAE or Cloud Function
4. MLE Endpoints autoscale; costs are per-minute
5. Surrounding GAE or Cloud Functions scale/cost separately

Azure ML Service
1. Train, then register the model w/ ML Service (limited langs)
2. Define an entry script with surrounding logic
3. Define an inference config and a deployment config
4. Deploy to Azure’s Container or Kubernetes Services
5. Scaling/cost varies by machine type & resources

Algorithmia
1. Train on any platform, copy the model (or connect a datastore)
2. Copy over the same prediction code you’d use locally (optionally add any other logic you want)
3. Full autoscaling; costs are per-second for execution only (no overhead or GPU charges)
Algorithmia’s famous 5-minute model deploy
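The "same prediction code you'd use locally" in that deploy flow is a plain Python file whose entry point is a function named `apply(input)` — Algorithmia's convention for the function invoked on each API request. The model loading and sentiment logic below are illustrative placeholders, not a real algorithm.

```python
# Sketch of the code published in a 5-minute model deploy.
# The platform calls apply(input) with each request's payload.

def load_model():
    # In practice: load your trained model from bundled files or a
    # connected datastore. Here, a stub stands in for the real model.
    return lambda text: "positive" if "love" in text.lower() else "negative"

model = load_model()  # runs once per container, not once per call

def apply(input):
    """Entry point invoked for every API request."""
    text = input["text"] if isinstance(input, dict) else str(input)
    return {"sentiment": model(text)}
```

Loading the model at module scope (outside `apply`) matters: it keeps the expensive load off the per-request path once the container is warm.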
Need to host models in your private cloud? ALGORITHMIA ENTERPRISE - your company’s private ML inventory & model-as-a-service platform

● Deploy: develop models in any language, framework, or infrastructure
● Scale: expose models as highly-reliable, versioned APIs that autoscale to 100s of reqs/second
● Discover: describe your model in a central catalog where peers can easily discover & use it
● Monitor: house thousands of models under one roof with a uniform REST interface and a central dashboard
Bonus round: combine your models with others’...

1. Image URL (thx suddenlycat.com!)
2. Object Detection (ObjectDetectionCOCO)
3. Get Tweets by Keyword & Analyze (AnalyzeTweets)
4. Cloudinary image transformation & CDN (CloudinaryUrl)
5. MemeGenerator

...into a single API call

Try it at: https://algorithmia.com/algorithms/jpeck/MemeGenerator
Cloudinary cookbooks: https://cloudinary.com/cookbook
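The chaining above is ordinary function composition: each model's output feeds the next model's input. A minimal sketch, with each stage injected as a callable so the pipeline is testable offline — in production each stage would be an API call to the named algorithm; the stubs and their outputs here are invented for illustration.

```python
# Sketch of composing several hosted models into one call, mirroring
# the meme pipeline above. Stage names in comments match the slides.

def pipeline(image_url, detect, analyze_tweets, transform, make_meme):
    objects = detect(image_url)        # e.g. ObjectDetectionCOCO
    mood = analyze_tweets(objects[0])  # e.g. AnalyzeTweets on first object
    framed = transform(image_url)      # e.g. Cloudinary URL transformation
    return make_meme(framed, mood)     # e.g. MemeGenerator

# Stub stages show the data flow without any network calls:
result = pipeline(
    "http://example.com/cat.jpg",
    detect=lambda url: ["cat"],
    analyze_tweets=lambda keyword: "joyful",
    transform=lambda url: url + "?w=512",
    make_meme=lambda url, caption: f"{url} | {caption}",
)
print(result)  # → http://example.com/cat.jpg?w=512 | joyful
```

Wrapping this composition in its own hosted function is what turns four model calls into the "single API call" the slide describes.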
Tutorial #1: building an app bit.ly/build-ml-fullstack
Tutorial #2: deploy a model bit.ly/algodev > digit_recognition
Do we have time for some demos?
• Colorize Photos
• Transform Videos
• Metadata Extraction
• Timeseries Analysis
• Lots more: demos.algorithmia.com
Jon Peck Developer & Advocate
FREE STUFF
$50 free at Algorithmia.com signup code: ai-in-api
WE ARE HIRING
algorithmia.com/jobs ● Seattle or Remote ● Bright, collaborative env ● Unlimited PTO ● Dog-friendly
@peckjon
bit.ly/ai-in-api
THANK YOU!