
Scraping Jiangsu company information from Qichacha with Python and exporting it to an Excel spreadsheet


1. Preliminary preparation

See the previous article for details.

2. Required libraries: requests, BeautifulSoup, xlwt, lxml

1. BeautifulSoup: an HTML/XML parsing library that makes it easy to extract information from scraped pages
2. xlwt: writes Excel (.xls) spreadsheets
3. lxml: a fast XML/HTML parser (also usable as a BeautifulSoup backend)
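All four libraries are installable from PyPI; note that the BeautifulSoup package is published under the name beautifulsoup4 (the command below is a setup sketch, not part of the original post):

```shell
# Install the scraping and Excel-writing dependencies
pip install requests beautifulsoup4 xlwt lxml
```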

3. Approach

Qichacha has anti-scraping measures, so plain requests get blocked. To get around them we mimic a real browser: open the Qichacha site in a browser, copy the Cookie and the rest of the request headers, send those headers with every request, and then use BeautifulSoup to walk the page's nodes and pick out the information we need.
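The node-extraction step can be tried out on a made-up HTML fragment shaped like one of Qichacha's result panels (the fragment and all values below are invented for illustration; the real page markup may differ):

```python
from bs4 import BeautifulSoup

# A hypothetical fragment mimicking one company "panel" on a results page.
html = '''
<div class="panel panel-default">
  <img src="/logo/demo.jpg"/>
  <a class="name">Demo Trading Co., Ltd.</a>
  <p class="text-muted clear text-ellipsis m-t-xs">Legal rep: Zhang San</p>
  <p class="text-muted clear text-ellipsis m-t-xs">Address: Nanjing, Jiangsu</p>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
panel = soup.find(class_='panel panel-default')   # one panel per company
icon = panel.find('img')['src']                   # company logo URL
name = panel.find(class_='name').text             # company name
details = panel.findAll(class_='text-muted clear text-ellipsis m-t-xs')
print(icon, name, details[0].text, details[1].text, sep=' | ')
```

Passing a string with spaces to `class_` matches the exact `class` attribute value, which is why the full "panel panel-default" string is used.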

4. Source code

# encoding: utf-8
import requests
from bs4 import BeautifulSoup
import xlwt


def craw():
    file = xlwt.Workbook()
    table = file.add_sheet('sheet1', cell_overwrite_ok=True)
    print('Crawling, please wait....')
    for n in range(1, 500):
        print('Page ' + str(n) + '......')
        url = 'https://www.qichacha.com/g_JS_' + str(n) + '.html'
        # Headers copied from a real browser session; replace the Cookie
        # value with one from your own logged-in session.
        headers = {
            'Host': 'www.qichacha.com',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
            'Accept-Language': 'zh-CN,zh;q=0.9',
            'Accept-Encoding': 'gzip, deflate, br',
            'Referer': 'http://www.qichacha.com/',
            'Cookie': r'zg_did=%7B%22did%22%3A%20%22166870cd07f60d-0c80294526eac7-36664c08-1fa400-166870cd0801af%22%7D; acw_tc=3af3b59815398640670163813ec3ddf30042b9b31607691a7b8d249c27; UM_distinctid=166870d292d85-016e1a972f471f-36664c08-1fa400-166870d292f349; _uab_collina=153986407973326937715323; QCCSESSID=g0gqbq7t1r8ksn94j8ii1qpbq1; CNZZDATA1254842228=364260894-1539860390-https%253A%252F%252Fwww.qichacha.com%252F%7C1540383468; Hm_lvt_3456bee468c83cc63fb5147f119f1075=1539864081,1540384169; zg_de1d1a35bfa24ce29bbf2c7eb17e6c4f=%7B%22sid%22%3A%201540384168992%2C%22updated%22%3A%201540384533698%2C%22info%22%3A%201539864055943%2C%22superProperty%22%3A%20%22%7B%7D%22%2C%22platform%22%3A%20%22%7B%7D%22%2C%22utm%22%3A%20%22%7B%7D%22%2C%22referrerDomain%22%3A%20%22%22%7D; Hm_lpvt_3456bee468c83cc63fb5147f119f1075=1540384534',
            'Connection': 'keep-alive',
            'If-Modified-Since': 'Wed, 24 Oct 2018 12:35:27 GMT',
            'If-None-Match': '"59*******"',
            'Cache-Control': 'private',
        }

        response = requests.get(url, headers=headers)
        response.encoding = 'utf-8'
        if response.status_code != 200:
            print(response.status_code)
            print('ERROR')
            continue  # skip failed pages instead of trying to parse them
        soup = BeautifulSoup(response.text, 'html.parser')
        panels = soup.findAll(class_='panel panel-default')  # one panel per company
        for i in range(len(panels)):
            soup2 = BeautifulSoup(str(panels[i]), 'lxml')
            icon = soup2.find('img').attrs['src']
            table.write((n - 1) * 10 + i, 1, str(icon))  # assumes 10 results per page
            name = soup2.find(class_='name').text
            table.write((n - 1) * 10 + i, 2, name)
            try:
                details = soup2.findAll(class_='text-muted clear text-ellipsis m-t-xs')
                content = details[0].text
                address = details[1].text
                table.write((n - 1) * 10 + i, 3, content)
                table.write((n - 1) * 10 + i, 4, address)
            except IndexError:
                print('Error on page ' + str(n) + ', row ' + str(i))

    file.save('D:/qcc.xls')


if __name__ == '__main__':
    craw()
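One thing worth noting: the row index `(n - 1) * 10 + i` hard-codes the assumption of exactly 10 companies per page, so a short page leaves blank rows and a longer one would overwrite rows from the next page. A running counter avoids the assumption (a sketch with hypothetical page sizes, not from the original post):

```python
# Sketch: replace the computed row index with a running counter.
row = 0
pages = [9, 10, 8]  # hypothetical result counts for three scraped pages

for count in pages:
    for i in range(count):
        # table.write(row, 1, ...)  # each record lands on the next free row
        row += 1

print(row)  # total rows written
```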

5. Results


Author: XuJiaxin_
Link: https://www.jianshu.com/p/114665cc3d89

