
Scraping financial news data with Scrapy in practice (Eastmoney, 东方财富网)

Date: 2018-12-27 11:17:49


First, the BeautifulSoup version:

import requests
from bs4 import BeautifulSoup

link_head = '/news/cywjh_'   # path only; the site's domain prefix is not shown here
link_end = '.html'
hd = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36'}

# crawl the first 3 list pages
for i in range(1, 4):
    link = link_head + str(i) + link_end
    r = requests.get(link, headers=hd)
    r.encoding = r.apparent_encoding
    soup = BeautifulSoup(r.text, 'lxml')
    topic_list = soup.find_all('div', class_='text')
    for each in topic_list:
        title = each.find('p', class_='title')
        print(title.a.text.strip())      # article title
        print(each.a['href'])            # article link
        content = each.find('p', class_='info')
        print(content.text.strip())      # article summary

Scrapy version (a step-by-step summary of the full procedure, for anyone who wants to learn from it)

Open a command prompt (cmd) and change into the directory of your choice; I switched to the desktop:

cd C:\Users\Heisenberg\Desktop

Then run the command:

scrapy startproject financeSpider

A project folder named financeSpider then appears on the desktop.
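For reference, a freshly generated project typically has the standard scrapy startproject layout shown below (the exact file list can vary slightly with the Scrapy version):

financeSpider/
    scrapy.cfg
    financeSpider/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py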

Next, open items.py and define the fields. After editing, it looks like this:

import scrapy


class FinancespiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()
    content = scrapy.Field()

Then, in cmd, run:

scrapy genspider finance
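For reference, scrapy genspider creates a skeleton financeSpider/spiders/finance.py roughly like the following (the domain argument is not shown in the command above, so the values here are just illustrative placeholders):

import scrapy


class FinanceSpider(scrapy.Spider):
    name = 'finance'
    allowed_domains = ['<site-domain>']      # placeholder, filled from the genspider argument
    start_urls = ['http://<site-domain>/']   # placeholder

    def parse(self, response):
        pass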

Edit the spider finance.py to do the page parsing. The modified code is shown below.

First, the plain (single-stage) version:

# -*- coding: utf-8 -*-
import scrapy
from bs4 import BeautifulSoup

# import FinancespiderItem from this project's financeSpider.items
from financeSpider.items import FinancespiderItem


class FinanceSpider(scrapy.Spider):
    name = 'finance'
    allowed_domains = ['']
    start_urls = ['/news/cywjh_1.html']
    url_head = '/news/cywjh_'
    url_end = '.html'

    def start_requests(self):
        # build the URLs of the first 3 list pages
        for i in range(1, 4):
            url = self.url_head + str(i) + self.url_end
            print('Current page:', url)
            # send a Request for each news list page
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # response.encoding = response.apparent_encoding
        soup = BeautifulSoup(response.text, 'lxml')
        title_list = soup.find_all('div', class_='text')
        for i in range(len(title_list)):
            # wrap the data in a FinancespiderItem (a dict-like object)
            item = FinancespiderItem()
            title = title_list[i].find('p', class_='title')
            title = title.a.text.strip()
            link = title_list[i].a['href']
            content = title_list[i].find('p', class_='info')
            content = content.text.strip()
            item['title'] = title
            item['link'] = link        # the pipeline writes the link, so it must be set
            item['content'] = content  # here "content" is only the list-page summary
            yield item
            # To fetch the full article instead, request the article link and pass the
            # item along via meta (this is what the parallel version below does):
            # yield scrapy.Request(url=link, meta={'item': item}, callback=self.parse2)

    # def parse2(self, response):
    #     # receive the item passed via meta
    #     item = response.meta['item']
    #     # parse the article content
    #     soup = BeautifulSoup(response.text, "lxml")
    #     content = soup.find('p', class_='info')
    #     content = content.text.strip()
    #     # content = content.replace('\n', " ")
    #     # print('hello,content', content)
    #     item['content'] = content
    #     # return the item to the item pipeline
    #     yield item

Now the parallel (two-stage) version:

# -*- coding: utf-8 -*-
import scrapy
from bs4 import BeautifulSoup

# import FinancespiderItem from this project's financeSpider.items
from financeSpider.items import FinancespiderItem


class FinanceSpider(scrapy.Spider):
    name = 'finance'
    allowed_domains = ['']
    start_urls = ['/news/cywjh_1.html']
    url_head = '/news/cywjh_'
    url_end = '.html'

    def start_requests(self):
        # build the URLs of the first 3 list pages
        for i in range(1, 4):
            url = self.url_head + str(i) + self.url_end
            print('Current page:', url)
            # send a Request for each news list page
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # response.encoding = response.apparent_encoding
        soup = BeautifulSoup(response.text, 'lxml')
        title_list = soup.find_all('div', class_='text')
        for i in range(len(title_list)):
            # wrap the data in a FinancespiderItem (a dict-like object)
            item = FinancespiderItem()
            title = title_list[i].find('p', class_='title')
            title = title.a.text.strip()
            link = title_list[i].a['href']
            item['title'] = title
            item['link'] = link   # the pipeline writes the link, so it must be set
            # follow the article link, passing the half-filled item along via meta
            yield scrapy.Request(url=link, meta={'item': item}, callback=self.parse2)

    def parse2(self, response):
        # receive the item passed via meta
        item = response.meta['item']
        # parse the full article body
        soup = BeautifulSoup(response.text, "lxml")
        content = soup.find('div', class_='b-review')
        content = content.text.strip()
        # content = content.replace('\n', " ")
        # print('hello,content', content)
        item['content'] = content
        # hand the finished item to the item pipeline
        yield item

Edit pipelines.py, which handles data storage. After modification:

class FinancespiderPipeline(object):
    # remember to escape backslashes '\' in the Windows path
    file_path = 'C:\\Users\\Heisenberg\\Desktop\\financeSpider\\result.txt'

    def __init__(self):
        self.article = open(self.file_path, "w", encoding="utf-8")

    # the pipeline's processing method, called once per item
    def process_item(self, item, spider):
        title = item['title']
        link = item['link']
        content = item['content']
        output = title + '\n' + link + '\n' + content + '\n'
        print('hello,output', output)
        self.article.write(output)
        return item
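One thing the pipeline above never does is close the output file. A minimal optional addition (not part of the original code) is a close_spider method inside FinancespiderPipeline:

    def close_spider(self, spider):
        # Scrapy calls this once when the spider finishes; closing the file flushes all output to disk
        self.article.close()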

Then make sure to uncomment ITEM_PIPELINES in settings.py.
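After uncommenting, the setting should look roughly like this (the class path assumes the default names used above):

ITEM_PIPELINES = {
    'financeSpider.pipelines.FinancespiderPipeline': 300,
}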

With all of the above done, run in cmd:

scrapy crawl finance
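As a side note, if you only want a quick dump of the items without going through the pipeline, Scrapy's built-in feed export can write them directly, for example:

scrapy crawl finance -o result.json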

The results are as follows.

One issue I noticed: in the parallel version the results are not written to the file in page order; they appear in whatever order the responses come back, i.e. by crawl completion time (a possible workaround is sketched after the result listings below).

Parallel version:

Plain version:
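If the order matters, one workaround (my suggestion, not part of the original post) is to throttle concurrency in settings.py so that fewer responses are in flight at once; note that Scrapy's scheduler is LIFO by default, so this reduces the reordering but does not strictly guarantee page order:

CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 0.25   # optional: also slows the crawl down to be polite to the site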

Scrapy project files download link: financeSpider
