How to fix Crawled (403)

I am using Python 3 and Scrapy. I run the following in the scrapy shell:

url = "https://www.urban.com.au/projects/melbourne-square-93-119-kavanagh-street-southbank"

headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36"
}

fet = scrapy.Request(url, headers=headers)
fetch(fet)

It shows DEBUG: Crawled (403).


Please share any ideas on how to get a 200 response in the scrapy shell.


摇曳的蔷薇

3 Answers

慕雪6442864

If you open the page in a browser, it asks you to fill in a captcha before continuing. In other words, for high-volume automated traffic the site demands extra verification, which is why you see the 403.
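Since high request rates are what typically trigger the captcha, throttling the crawl can sometimes avoid it in the first place. A minimal sketch of the relevant Scrapy settings (these are standard Scrapy setting names; the values are illustrative assumptions, not tuned for this site):

```python
# settings.py — slow the crawl so it looks less like automated high traffic
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36"
DOWNLOAD_DELAY = 2            # seconds to wait between requests to the same site
AUTOTHROTTLE_ENABLED = True   # adapt the delay to the server's response times
CONCURRENT_REQUESTS_PER_DOMAIN = 1
```

This will not help once the site has already flagged your IP and insists on the captcha, but it reduces the chance of being flagged again.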

LEATH

The 403 error occurs because the site shows a captcha. If you solve the captcha and extract the cookie, it will work. An example using requests for debugging:

import requests

headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36',
    'cookie': 'your cookie',
}
response = requests.get('https://www.urban.com.au/projects/melbourne-square-93-119-kavanagh-street-southbank', headers=headers)
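If you copy the raw Cookie header out of the browser's devtools, it can be more convenient to pass it to requests as a dict via the cookies= parameter rather than as one header string. A small helper for that (cookie_header_to_dict is a hypothetical name for this sketch, not part of requests):

```python
def cookie_header_to_dict(cookie_header: str) -> dict:
    """Split a raw Cookie header string, as copied from browser devtools,
    into a dict suitable for requests' cookies= parameter."""
    pairs = (p.split("=", 1) for p in cookie_header.split("; ") if "=" in p)
    return {name: value for name, value in pairs}

print(cookie_header_to_dict("sessionid=abc123; csrftoken=xyz"))
# → {'sessionid': 'abc123', 'csrftoken': 'xyz'}
```

The dict can then be used as requests.get(url, headers=headers, cookies=cookie_header_to_dict(raw)).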

慕盖茨4494581

headers = {
    'authority': 'www.urban.com.au',
    'cache-control': 'max-age=0',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-user': '?1',
    'dnt': '1',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    'sec-fetch-site': 'none',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
}

Request('https://www.urban.com.au/projects/melbourne-square-93-119-kavanagh-street-southbank', headers=headers)

You need to mimic exactly the same headers a real browser sends.
