Web Crawling Modeling with Scrapy Models #TDC2014

Scrapy Models

Uploaded by: bruno-rocha

Posted on 13-May-2015


DESCRIPTION

Modeling web crawlers with Scrapy and ScrapyModels

TRANSCRIPT

Page 1: Web Crawling Modeling with Scrapy Models #TDC2014

Scrapy Models

Page 2

Web Crawling

- Capture unstructured content from the web: HTML, XML(?), plain text…
- Parse, validate, and store it
- Automate the process
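The capture/parse/validate pipeline above can be sketched with only the standard library; the HTML here is an inline stand-in for a page a real crawler would download first:

```python
from html.parser import HTMLParser

# Hypothetical page content; the "capture" step of a real crawler
# would download this with urllib or requests.
HTML = ('<html><body><a href="http://example.com/a">A</a>'
        '<a href="http://example.com/b">B</a></body></html>')

class LinkExtractor(HTMLParser):
    """Parse step: collect every href found in the document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

parser = LinkExtractor()
parser.feed(HTML)

# Validate step: keep only absolute http(s) URLs before storing them.
valid = [u for u in parser.links if u.startswith("http")]
print(valid)  # ['http://example.com/a', 'http://example.com/b']
```

Automating the process then means running this capture/parse/validate loop on a schedule or over a queue of URLs.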

Page 3

{'links': [u'http://www.python.org/~guido/',
           u'http://neopythonic.blogspot.com/',
           u'http://www.artima.com/weblogs/index.jsp?blogger=guido',
           u'http://python-history.blogspot.com/',
           u'http://www.python.org/doc/essays/cp4e.html',
           u'http://www.twit.tv/floss11',
           u'http://www.computerworld.com.au/index.php/id;66665771',
           u'http://www.stanford.edu/class/ee380/Abstracts/081105.html',
           u'http://stanford-online.stanford.edu/courses/ee380/081105-ee380-300.asx'],
 'name': u'Guido van Rossum',
 'nationality': u'Dutch',
 'photo_url': 'http://en.m.wikipedia.org//wiki/File:Guido_van_Rossum_OSCON_2006.jpg',
 'url': 'http://en.m.wikipedia.org/wiki/Guido_van_Rossum'}

Page 4

PYTHON ROCKS!!!

- LXML
- HTMLParser
- Beautiful Soup
- Scrapy
- XMLToDict
- Requests

Page 5

Page 6

Beautiful Soup

import requests
from bs4 import BeautifulSoup

html = requests.get("http://schblaums.com").text
soup = BeautifulSoup(html, "html.parser")
user_pictures = soup.find_all("img")
# [<Tag ...>, ...]
user_pictures[0]
# <img src="/user/picture.jpg" />

Page 7

LXML

from lxml import etree

tree = etree.fromstring(html)  # parse the markup string into an element tree
user_pictures = tree.xpath("//img")
# [<Element img>, <Element img>, ...]

Page 8

CSS
jQuery

Page 9

Scrapy

- Web crawling framework
- Automates the process
- Validation
- Data mapping
- Selectors with XPath or CSS support

Page 10

Scrapy Model

from mongoengine|django|* import Model

class Person(Model):
    name = StringField()
    links = ListField()
    picture = ImageField()
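As a sketch of how fetched data might be plugged into such a model, here is a minimal populate helper, with a dataclass standing in for the ORM model (the helper is illustrative, not scrapy_model's actual populate method):

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    # Stand-in for the ORM model on the slide (StringField, ListField, ...)
    name: str = ""
    links: list = field(default_factory=list)

def populate(model_cls, data):
    """Copy fetched values onto a fresh model instance, field by field."""
    instance = model_cls()
    for key, value in data.items():
        if hasattr(instance, key):
            setattr(instance, key, value)
    return instance

fetched = {"name": "Guido van Rossum",
           "links": ["http://www.python.org/~guido/"]}
person = populate(Person, fetched)
print(person.name)  # Guido van Rossum
```

With a real ORM, `populate` would be followed by a `save()` call to persist the instance.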

Page 11

{'links': [u'http://www.python.org/~guido/',
           u'http://neopythonic.blogspot.com/',
           u'http://www.artima.com/weblogs/index.jsp?blogger=guido',
           u'http://python-history.blogspot.com/',
           u'http://www.python.org/doc/essays/cp4e.html',
           u'http://www.twit.tv/floss11',
           u'http://www.computerworld.com.au/index.php/id;66665771',
           u'http://www.stanford.edu/class/ee380/Abstracts/081105.html',
           u'http://stanford-online.stanford.edu/courses/ee380/081105-ee380-300.asx'],
 'name': u'Guido van Rossum',
 'nationality': u'Dutch',
 'photo_url': 'http://en.m.wikipedia.org//wiki/File:Guido_van_Rossum_OSCON_2006.jpg',
 'url': 'http://en.m.wikipedia.org/wiki/Guido_van_Rossum'}

Page 12

Page 13

What is scrapy_model?

It is just a helper to create scrapers using the Scrapy Selectors, allowing you to select elements by CSS or by XPath and to structure your scraper via Models (just like an ORM model), which can be plugged into an ORM model via the populate method.

Import the BaseFetcherModel, CSSField or XPathField (you can use both)

from scrapy_model import BaseFetcherModel, CSSField

Go to a webpage you want to scrape and use Chrome dev tools or Firebug to figure out the CSS paths. Then, supposing you want to get the following fragment from some page:

<span id="person">Bruno Rocha <a href="http://brunorocha.org">website</a></span>

class MyFetcher(BaseFetcherModel):
    name = CSSField('span#person')
    website = CSSField('span#person a')
    # XPathField('//xpath_selector_here')

Page 14

Multiple queries in a single field

You can use multiple queries for a single field

name = XPathField(
    ['//*[@id="8"]/div[2]/div/div[2]/div[2]/ul',
     '//*[@id="8"]/div[2]/div/div[3]/div[2]/ul']
)

In that case, parsing will try to fetch with the first query and return as soon as it finds a match; otherwise it will try the subsequent queries until one finds something, or it will return an empty selector.

Page 15

Finding the best match by a query validator

If you want to run multiple queries and also validate the best match, you can pass a validator function, which takes the Scrapy selector and should return a boolean.

For example, imagine you take the "name" field defined above and you want to validate each query to ensure it has an <li> with the text "Schblaums" in it.

def has_schblaums(selector):
    for li in selector.css('li'):  # takes each <li> inside the ul selector
        li_text = li.css('::text').extract()  # extract only the text
        if "Schblaums" in li_text:  # check if "Schblaums" is there
            return True  # this selector is valid!
    return False  # invalid query, take the next or the default value

class Fetcher(....):
    name = XPathField(
        ['//*[@id="8"]/div[2]/div/div[2]/div[2]/ul',
         '//*[@id="8"]/div[2]/div/div[3]/div[2]/ul'],
        query_validator=has_schblaums,
        default="undefined_name"
    )

Page 16

Every method named parse_<field> will run, for its own field, after all the fields are fetched.

def parse_name(self, selector):
    # here selector is the scrapy selector for 'span#person'
    name = selector.css('::text').extract()
    return name

def parse_website(self, selector):
    # here selector is the scrapy selector for 'span#person a'
    website_url = selector.css('::attr(href)').extract()
    return website_url

After the fetcher is defined, you need to run the scraper:

fetcher = MyFetcher(url='http://.....')  # optionally you can use cached_fetch=True to cache requests on redis
fetcher.parse()
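The parse_<field> dispatch described above can be sketched in plain Python; MiniFetcher here is a hypothetical stand-in for the real BaseFetcherModel internals, working on plain strings instead of Scrapy selectors:

```python
class MiniFetcher:
    """Sketch only: after fetching, each field's raw value is passed
    through a parse_<field> method when the subclass defines one."""
    def __init__(self, raw_fields):
        self._raw = raw_fields
        self._data = {}

    def parse(self):
        for name, value in self._raw.items():
            # Look up a parse_<field> hook on the instance, if any.
            hook = getattr(self, "parse_" + name, None)
            self._data[name] = hook(value) if hook else value
        return self._data

class MyFetcher(MiniFetcher):
    def parse_name(self, value):
        # Clean the raw field, just like a real parse_name would.
        return value.strip().title()

fetcher = MyFetcher({"name": "  bruno rocha  ",
                     "website": "http://brunorocha.org"})
print(fetcher.parse())
# {'name': 'Bruno Rocha', 'website': 'http://brunorocha.org'}
```

Fields without a matching parse_<field> method ("website" here) keep their fetched value unchanged.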

Page 17

https://github.com/rochacbruno/scrapy_model