Notes
I. Approaches to handling anti-crawler mechanisms:
- Browser camouflage / user-agent pools (a minimal sketch with the requests library follows this list);
- IP rate limiting -> IP proxy pool;
- AJAX / JS asynchronous loading -> capture the underlying requests (packet sniffing);
- CAPTCHAs -> captcha-solving services.
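A minimal sketch of the first two ideas outside Scrapy, using the requests library; the user-agent strings and the proxy below are placeholders rather than values verified to work:

    import random
    import requests

    # Placeholder pools; fill these with real, working values.
    USER_AGENT_POOL = [
        'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
    ]
    PROXY_POOL = ['124.65.238.166:80']  # same format as the pool used in the Scrapy example below

    def fetch(url):
        headers = {'User-Agent': random.choice(USER_AGENT_POOL)}   # browser camouflage via a UA pool
        proxy = random.choice(PROXY_POOL)                          # rotate the source IP via a proxy pool
        return requests.get(url, headers=headers,
                            proxies={'http': 'http://' + proxy}, timeout=10)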
II. Miscellaneous notes:
- def process_request(self, request, spider): the downloader-middleware hook that handles each outgoing request; setting request.meta["proxy"] = ... inside it attaches a proxy IP.
- By default Scrapy retries a failed request twice and then gives up; repeated give-ups are a sign that the current proxy IP is not usable (see the settings sketch after this list).
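The two-retries behavior matches Scrapy's default RETRY_TIMES value of 2; a sketch of the related settings (the values shown are illustrative):

    # settings.py (illustrative values)
    RETRY_ENABLED = True     # retry failed downloads at all
    RETRY_TIMES = 2          # extra attempts per request; 2 is Scrapy's default
    DOWNLOAD_TIMEOUT = 10    # seconds before a slow proxy is treated as a failure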
Hands-on practice
Target URL: http://weixin.sogou.com/weixin?type=2&query=python&ie=utf8
Goal: crawl articles about Python from the search results, extracting each result's title, title link, and description.
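The original notes do not show the item definition, so here is a hedged sketch of items.py; the class name WeixinItem and the field names title, link, and desc are assumptions based on the stated goal.

    # items.py (sketch; names are assumptions)
    import scrapy

    class WeixinItem(scrapy.Item):
        title = scrapy.Field()   # article title
        link = scrapy.Field()    # link to the article
        desc = scrapy.Field()    # short description / summary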
1. Main code in middlewares.py
# -*- coding: utf-8 -*-
import random
from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware  # proxy-IP middleware, fixed import path
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware  # user-agent middleware, fixed import path

class IPPOOLS(HttpProxyMiddleware):
    def __init__(self, ip=''):
        '''Initialization'''
        self.ip = ip

    def process_request(self, request, spider):
        '''Attach a randomly chosen proxy IP to the request'''
        ip = random.choice(self.ip_pools)  # pick one IP at random
        print('Current proxy IP: ' + ip['ip'])
        try:
            request.meta["proxy"] = "http://" + ip['ip']
        except Exception as e:
            print(e)

    ip_pools = [
        {'ip': '124.65.238.166:80'},
        # {'ip': ''},
    ]

class UAPOOLS(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        '''Attach a randomly chosen user-agent to the request'''
        ua = random.choice(self.user_agent_pools)
        print('Current user-agent: ' + ua)
        try:
            request.headers.setdefault('User-Agent', ua)
        except Exception as e:
            print(e)

    user_agent_pools = [
        'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3',
        'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36',
    ]
2. Main code in settings.py
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 123,
    'weixin.middlewares.IPPOOLS': 124,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 125,
    'weixin.middlewares.UAPOOLS': 126,
}
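To round off the walkthrough, a hedged sketch of the spider itself, which the original notes do not include; the file name, spider name, and XPath expressions are assumptions about Sogou's result-page markup and may need adjusting against the live page.

    # weixin/spiders/wxarticle.py (sketch; selectors are assumptions)
    import scrapy
    from weixin.items import WeixinItem  # assumes the item sketch shown earlier

    class WxArticleSpider(scrapy.Spider):
        name = 'wxarticle'
        start_urls = ['http://weixin.sogou.com/weixin?type=2&query=python&ie=utf8']

        def parse(self, response):
            # each search result is assumed to sit inside a div with class "txt-box"
            for box in response.xpath('//div[@class="txt-box"]'):
                item = WeixinItem()
                item['title'] = box.xpath('string(h3/a)').get()                   # title
                item['link'] = box.xpath('h3/a/@href').get()                      # title link
                item['desc'] = box.xpath('string(p[@class="txt-info"])').get()    # description
                yield item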