BeautifulSoup4 scrape won't go past the first page of the site

I am trying to scrape from the first page through page 14 of this site: https://cross-currents.berkeley.edu/archives?author=&title=&type=All&issue=All&region=All Here is my code:


import requests as r
from bs4 import BeautifulSoup as soup
import pandas

#make a list of all web pages' urls
webpages=[]
for i in range(15):
    root_url = 'https://cross-currents.berkeley.edu/archives?author=&title=&type=All&issue=All&region=All&page='+ str(i)
    webpages.append(root_url)
    print(webpages)

#start looping through all pages
for item in webpages:
    headers = {'User-Agent': 'Mozilla/5.0'}
    data = r.get(item, headers=headers)
    page_soup = soup(data.text, 'html.parser')

    #find targeted info and put them into a list to be exported to a csv file via pandas
    title_list = [title.text for title in page_soup.find_all('div', {'class':'field field-name-node-title'})]
    title = [el.replace('\n', '') for el in title_list]

    #export to csv file via pandas
    dataset = {'Title': title}
    df = pandas.DataFrame(dataset)
    df.index.name = 'ArticleID'
    df.to_csv('example31.csv',encoding="utf-8")

The output csv file only contains the targeted info from the last page. When I print `webpages`, it shows that all the pages' urls have been correctly put into the list. What am I doing wrong? Thanks in advance!
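The symptom points at `df.to_csv('example31.csv', ...)` sitting inside the `for item in webpages:` loop: each iteration rebuilds the DataFrame from only that page's titles and overwrites the file, so only the last page survives. A minimal sketch of the fix is to accumulate titles across all pages and write the CSV once after the loop. Here `scrape_titles` is a hypothetical stand-in for the per-page requests/BeautifulSoup code, so the pattern can be shown without hitting the network:

```python
import pandas

# Hypothetical stand-in for the per-page scrape. In the real script this
# would do the r.get(...) + page_soup.find_all(...) work and return the
# cleaned title list for that one page.
def scrape_titles(page_url):
    return ['Title from ' + page_url]

webpages = ['https://example.com/archives?page=' + str(i) for i in range(3)]

all_titles = []                      # accumulate titles across ALL pages
for url in webpages:
    all_titles.extend(scrape_titles(url))

# build the DataFrame and write the CSV once, after the loop,
# so earlier pages are no longer overwritten
df = pandas.DataFrame({'Title': all_titles})
df.index.name = 'ArticleID'
df.to_csv('example31.csv', encoding='utf-8')
```

An alternative that keeps `to_csv` inside the loop is to append instead of overwrite (`mode='a'`, writing the header only on the first pass), but collecting everything first is simpler and produces a clean index.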


斯蒂芬大帝