Why can't some web pages be scraped with Python and bs4?

The goal of this code is to fetch a page's HTML and scrape it with bs4.


from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

myUrl = ''  # the target web page goes here

# open the connection and download the page
uClient = uReq(myUrl)
pageHtml = uClient.read()
uClient.close()

# parse the HTML
pageSoup = soup(pageHtml, "html.parser")
print(pageSoup)

However, it doesn't work; this is the error the terminal shows:


Traceback (most recent call last):
  File "main.py", line 7, in <module>
    uClient = uReq(myUrl)
  File "C:\ProgramData\Anaconda3\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\ProgramData\Anaconda3\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "C:\ProgramData\Anaconda3\lib\urllib\request.py", line 640, in http_response
    response = self.parent.error(
  File "C:\ProgramData\Anaconda3\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "C:\ProgramData\Anaconda3\lib\urllib\request.py", line 502, in _call_chain
    result = func(*args)
  File "C:\ProgramData\Anaconda3\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden


白衣染霜花
126 views · 2 answers

慕桂英4014372

You are missing some headers that the site may require. I'd suggest using the requests package instead of urllib, as it is more flexible. See the working example below:

import requests

url = "https://www.idealista.com/areas/alquiler-viviendas/?shape=%28%28wt_%7BF%60m%7Be%40njvAqoaXjzjFhecJ%7BebIfi%7DL%29%29"

querystring = {"shape": "((wt_{F`m{e@njvAqoaXjzjFhecJ{ebIfi}L))"}

payload = ""
headers = {
    'authority': "www.idealista.com",
    'cache-control': "max-age=0",
    'upgrade-insecure-requests': "1",
    'user-agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36",
    'accept': "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    'sec-fetch-site': "none",
    'sec-fetch-mode': "navigate",
    'sec-fetch-user': "?1",
    'sec-fetch-dest': "document",
    'accept-language': "en-US,en;q=0.9"
}

response = requests.request("GET", url, data=payload, headers=headers, params=querystring)

print(response.text)

From there you can parse the body with bs4:

pageSoup = soup(response.text, "html.parser")

Note, however, that the site you are trying to scrape may show a captcha, so you will probably need to rotate your user-agent header and your IP address.
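The user-agent rotation mentioned above can be sketched with the standard library alone. This is a minimal illustration, not part of the original answer, and the user-agent strings in the pool are hypothetical placeholders:

```python
import random

# Hypothetical pool of browser user-agent strings; in practice you would
# maintain a larger, up-to-date list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
]

def pick_user_agent():
    """Return a randomly chosen user-agent string for the next request."""
    return random.choice(USER_AGENTS)

# Place the chosen string in the 'user-agent' entry of the headers dict
# before each request, so repeated requests do not all look identical.
print(pick_user_agent())
```

Each call may return a different string; the caller simply swaps it into the headers dict it passes to requests.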

繁华开满天机

The HTTP 403 error you received means the web server rejected your script's request because it lacked the permission/credentials to access that page. I can reach the page from your example here, so the most likely explanation is that the web server noticed you were trying to scrape it and banned your IP address from requesting further pages. Web servers commonly do this to keep scrapers from degrading their performance.
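If you want to stay with urllib as in the question, the same header idea applies there. A minimal sketch, assuming the server only checks for a browser-like User-Agent (the URL is a placeholder, not the asker's actual page):

```python
from urllib.request import Request, urlopen

# Placeholder URL -- substitute the page you are actually scraping.
url = "https://example.com/"

# urllib identifies itself as "Python-urllib/3.x" by default, which many
# servers answer with 403; a browser-like User-Agent often gets through.
req = Request(url, headers={
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
})

# urllib capitalizes stored header names, so the key becomes "User-agent".
print(req.get_header("User-agent"))

# The request would then be sent with:
#     pageHtml = urlopen(req).read()
```

The Request object is built and inspected without touching the network; sending it is the commented-out last step, since the outcome depends on the server.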