Writing the spider:
Build a spider program with the Scrapy framework:
import scrapy
from scrapy.selector import Selector
from tutorial.items import DmozItem


class DmozSpider(scrapy.Spider):
    name = 'dmoz'
    allowed_domains = ['dmoz.org']
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        for site in sites:
            item = DmozItem()  # instantiate a DmozItem
            item['title'] = site.xpath('a/text()').extract()
            item['link'] = site.xpath('a/@href').extract()
            item['desc'] = site.xpath('text()').extract()
            yield item
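The spider imports DmozItem from tutorial.items, which is not shown above. A minimal sketch of that item class, assuming it declares only the three fields that parse() fills in:

# tutorial/items.py -- sketch of the item class the spider imports
import scrapy

class DmozItem(scrapy.Item):
    # the three fields populated in DmozSpider.parse
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()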
Running the crawl:
Launch the spider from the command line to crawl the target site. Since many sites throttle or block direct crawling, the downloader middleware below routes each request through an authenticated proxy server:
#! -*- encoding:utf-8 -*-
import base64
import sys
import random

PY3 = sys.version_info[0] >= 3


def base64ify(bytes_or_str):
    # encode str/bytes to a urlsafe base64 string on both Python 2 and 3
    if PY3 and isinstance(bytes_or_str, str):
        input_bytes = bytes_or_str.encode('utf8')
    else:
        input_bytes = bytes_or_str
    output_bytes = base64.urlsafe_b64encode(input_bytes)
    if PY3:
        return output_bytes.decode('ascii')
    else:
        return output_bytes


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # proxy server (product site: www.16yun.cn)
        proxyHost = "t.16yun.cn"
        proxyPort = "31111"
        # proxy credentials
        proxyUser = "username"
        proxyPass = "password"
        # route the request through the proxy with basic auth
        # (these two lines complete the original snippet, which stopped here)
        request.meta['proxy'] = "http://{0}:{1}".format(proxyHost, proxyPort)
        request.headers['Proxy-Authorization'] = 'Basic ' + base64ify(proxyUser + ":" + proxyPass)
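For the middleware to take effect, it has to be registered in the project's settings.py. The module path below is an assumption; adjust it to wherever the class actually lives in your project:

# settings.py -- enable the proxy middleware
# ("tutorial.middlewares" is an assumed module path)
DOWNLOADER_MIDDLEWARES = {
    'tutorial.middlewares.ProxyMiddleware': 100,
}

With that in place, the spider defined earlier can be launched by its name:

scrapy crawl dmoz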
Saving the data:
Scrapy's feed exports support four common output formats; the commands below save the scraped items in each:
JSON format (non-ASCII text is written as Unicode escapes by default):
scrapy crawl itcast -o teachers.json
JSON Lines format (also Unicode-escaped by default):
scrapy crawl itcast -o teachers.jsonl
CSV (comma-separated values), which can be opened directly in Excel:
scrapy crawl itcast -o teachers.csv
XML format:
scrapy crawl itcast -o teachers.xml
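If readable UTF-8 output is preferred over the default Unicode-escaped JSON noted above, one alternative is a small item pipeline. This is only a sketch (Python 3 assumed; the file name and class name are illustrative), not part of the original tutorial:

# pipelines.py -- sketch of a pipeline writing UTF-8 JSON Lines
# instead of the \uXXXX-escaped output of the default exporter
import json

class JsonWriterPipeline(object):
    def open_spider(self, spider):
        self.file = open('teachers.jsonl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # ensure_ascii=False keeps non-ASCII text readable in the file
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

Enable it through the ITEM_PIPELINES setting in settings.py, e.g. {'tutorial.pipelines.JsonWriterPipeline': 300}.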