
Scraping 91chuli.com with a Python crawler

慕数据7186066

I'm a complete beginner at web scraping. Following tutorials I found online, I wrote a crawler for 91chuli.com (www.91chuli.com). If anything is wrong, please let me know. Thanks.

The scraping below is done with lxml (plus requests to fetch the page).

import requests
from lxml import etree

def getHTMLText(url):
    # Request headers: the cookie copied from my browser session plus a
    # user-agent so the site does not reject the request.
    kv = {
        'cookie': 'ssids=1581214855718752; sfroms=JIAOYIMALL001; historyScanGame=%5B%225667%22%2Cnull%5D; session=1581214855718753-7; showFixGuideDialog=true',
        'user-agent': 'Mozilla/5.0'}
    r = requests.get(url, headers=kv)
    r.raise_for_status()   # raise an exception on HTTP errors
    r.encoding = 'utf-8'
    return r.text

def shixian(html):
    # Parse the HTML string and pull out the listing names and prices.
    htmls = etree.HTML(html)
    mc = htmls.xpath(
        '//div[@class="wrap"]/div[@class="mod-con sel-content "]/div[@class="bd"]/ul[@class="list-con specialList"]/li[@name="goodsItem"]/span[@class="name"]/span[@class="is-account"]/a/text()')
    price = htmls.xpath(
        '//div[@class="wrap"]/div[@class="mod-con sel-content "]/div[@class="bd"]/ul[@class="list-con specialList"]/li[@name="goodsItem"]/span[@class="price"]')
    # {3} supplies chr(12288), a full-width space, as the fill character so
    # the Chinese item names stay aligned in columns.
    tplt = "{0:^4}\t{1:^12}\t{2:{3}^20}"
    print("91处理网")
    print(tplt.format("No.", "Price", "Name", chr(12288)))
    count = 0
    for i in range(len(mc)):
        count = count + 1
        print(tplt.format(count, price[i].text, mc[i], chr(12288)))

if __name__ == '__main__':
    # Build the URL for page 1 of the listings, fetch it, then parse it.
    url = 'https://www.91chuli.com/'
    url = url + '-n' + '1' + '.html'
    html = getHTMLText(url)
    shixian(html)
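
If you want more than the first page, a small loop over the page number is enough. This is only a sketch: it assumes the '-n<page>.html' suffix built above works the same way for later pages (I have only tried page 1), and scrape_pages is a helper name I made up here.

# Minimal pagination sketch, assuming the '-n<page>.html' URL pattern used
# above also holds for pages beyond the first.
def scrape_pages(base_url, pages):
    for page in range(1, pages + 1):
        page_url = base_url + '-n' + str(page) + '.html'
        html = getHTMLText(page_url)   # reuse the request helper above
        shixian(html)                  # parse and print this page's listings

# Example usage: scrape_pages('https://www.91chuli.com/', 3)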

