2018-08-13 18:29
class my_proxy(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = 'http-cla.abuyun.com:9030'
        proxy_name_pass = b'H211EATS905745KC:F8FFBC929EB7D5A7'
        encode_pass_code = base64.b64decode(proxy_name_pass)
        request.headers['Proxy-Authrization'] = 'Basic ' + encode_pass_code.decode()
import base64

# Proxy server
proxyServer = "http://http-dyn.abuyun.com:9020"

# Proxy tunnel credentials
proxyUser = "H01234567890123D"
proxyPass = "0123456789012345"

# for Python 2
proxyAuth = "Basic " + base64.b64encode(proxyUser + ":" + proxyPass)

# for Python 3
#proxyAuth = "Basic " + base64.urlsafe_b64encode(bytes((proxyUser + ":" + proxyPass), "ascii")).decode("utf8")

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta["proxy"] = proxyServer
        request.headers["Proxy-Authorization"] = proxyAuth
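A middleware like the one above only takes effect if it is also registered in the project's settings.py. A minimal sketch, assuming a hypothetical project module named `myproject` (the path and priority value below are illustrative, not from the thread):

```python
# settings.py -- module path 'myproject.middlewares' is an assumption
# about the project layout; adjust it to your own package name.
DOWNLOADER_MIDDLEWARES = {
    # Priority 543 slots the proxy middleware among Scrapy's built-ins;
    # any value that doesn't collide with another middleware works.
    'myproject.middlewares.ProxyMiddleware': 543,
}
```

If the key path does not match the real location of the class, Scrapy silently ignores it, which can make the log look "successful" while no tunnel is ever used.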
Why does the log report success no matter how carelessly I configure this? It doesn't feel like the tunnel is actually being used, even though I've already enabled it in settings.
The account above shows an expiration date of June 3rd. How can it still be used after it has expired?
Teacher, a question: using Splash with Scrapy and Abuyun, the proxy returns a 502 error. What's going on? I've been stuck on this for a long time, and I only changed this one line:
request.meta['splash']['args']['proxy'] = settings['PROXY_SERVER']
Begging for help, please!
Prepending http:// to the proxy address fixed it. Tested and it works:
class my_proxy(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = 'http://http-cla.abuyun.com:9030'
        proxy_name_pass = b'H211EATS905745KC:F8FFBC929EB7D5A7'
        encode_pass_code = base64.b64decode(proxy_name_pass)
        request.headers['Proxy-Authrization'] = 'Basic ' + encode_pass_code.decode()
Did you get it working?
Yes, I bought it.
I'm also getting the error: Proxy Authentication Required
Line 5 is wrong; it should be: encode_pass_code = base64.b64encode(proxy_name_pass)
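Putting the thread's fixes together — the http:// scheme on the proxy URL, b64encode instead of b64decode (Basic auth requires the credentials to be *encoded* into base64, not decoded), and additionally correcting the header name to 'Proxy-Authorization' (the spelling 'Proxy-Authrization' in the original code is itself a typo no reply flags) — a minimal sketch of the corrected middleware, with placeholder credentials rather than real Abuyun keys:

```python
import base64

class AbuyunProxyMiddleware(object):
    """Downloader middleware sketch: attach the tunnel proxy and Basic auth.

    proxy_server and proxy_user_pass are illustrative placeholders.
    """
    proxy_server = 'http://http-cla.abuyun.com:9030'  # scheme is required
    proxy_user_pass = b'USER:PASS'  # placeholder, format is user:password

    def process_request(self, request, spider):
        request.meta['proxy'] = self.proxy_server
        # ENCODE the credentials; calling b64decode here was the bug
        # the thread is discussing.
        auth = base64.b64encode(self.proxy_user_pass).decode()
        # The header must be spelled 'Proxy-Authorization'.
        request.headers['Proxy-Authorization'] = 'Basic ' + auth
```

With the placeholder b'USER:PASS', the header value comes out as 'Basic VVNFUjpQQVNT'; with a misspelled header name or decoded bytes, the proxy responds with Proxy Authentication Required, matching the error reported above.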
Python最火爬虫框架Scrapy入门与实践
67422 learners · 235 questions