
Python3: A Word Cloud from Douban Comments on 《前任3》 (The Ex-File 3)

Posted: 2022-04-04 07:02:19


In memory of my own past

This was meant to be a purely technical exercise, but it turned out a little melancholy. I have had the song 《体面》 on single-track repeat for two weeks, staying up until one or two every night, setting it to keep playing until four before drifting off. Maybe the present me just isn't good enough, and all I can do is envy other people's five-year runs, their lifelong runs, while I can never go back. I don't want a relationship to be like a sheet of paper: crumpled, smoothed out, then crumpled again.

She is wonderful; I just wasn't good enough.

Two years in college, one year long-distance, at different schools. Every few days I would run over to her campus; I came to know both campuses and got used to days that had each other in them. She is still in school, while I started working ahead of her. Thinking of her as I write this...

Enough of that. Let's get to the point.

Data Source

(1) Douban comments on The Ex-File 3 (crawled until the site stopped serving further pages; to be improved later)

Code:

# -*- coding: utf-8 -*-
# @Time   : /3/27 11:15
# @Author : 蛇崽
# @Email  : 643435675@
# @File   : test_douban_qianren3.py (Douban comments on The Ex-File 3)
import csv
import time

import requests
from lxml import etree

# Douban comment-page URL for the movie (subject id 26662193)
url = '/subject/26662193/comments?start=0&limit=20&sort=new_score&status=P&percent_type='
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36',
    'Cookie': 'gr_user_id=ffdf2f63-ec37-49b5-99e8-0e0d28741172; bid=qh9RXgIGopg; viewed="26826540_24703171"; ap=1; ll="118172"; ct=y; _vwo_uuid_v2=8C5B24903B1D1D3886FE478B91C5DE97|7eac18658e7fecbbf3798b88cfcf6113; _pk_ref.100001.4cf6=%5B%22%22%2C%22%22%2C1522129522%2C%22https%3A%2F%%2Flink%3Furl%3DdnHqCRiT1HlhToCp0h1cpdyV8rB9f_OfOvJhjRPO3p1jrl764LGvi7gbYSdskDMh%26wd%3D%26eqid%3De15db1bb0000e3cd000000045ab9b6fe%22%5D; _pk_id.100001.4cf6=4e61f4192b9486a8.1485672092.10.1522130672.1522120744.; _pk_ses.100001.4cf6=*'
}


def get_html(current_url):
    """Fetch one comment page and return it as an lxml tree."""
    time.sleep(2)  # be polite: pause between requests
    r = requests.get(current_url, headers=headers)
    r.raise_for_status()
    return etree.HTML(r.text)


def parse_html(content, writer):
    """Pull author, time, vote count and text out of every comment on the page."""
    items = content.xpath("//*[@class='comment-item']")
    for item in items:
        comment = item.xpath("./div[@class='comment']/p/text()")[0].strip()
        author = item.xpath("./div[@class='comment']/h3/span[@class='comment-info']/a/text()")[0].strip()
        comment_time = item.xpath("./div[@class='comment']/h3/span[@class='comment-info']/span[@class='comment-time ']/text()")[0].strip()
        is_useful = item.xpath("./div[@class='comment']/h3/span[@class='comment-vote']/span[@class='votes']/text()")[0]
        print('content:', comment)
        print('time:', comment_time)
        print('is_useful:', is_useful)
        # The full row would be (author, comment_time, is_useful, comment);
        # only the vote count and the text are kept for the word cloud.
        writer.writerow((is_useful, comment))


if __name__ == '__main__':
    with open('douban.txt', 'a+', encoding='utf-8', newline='') as csvf:
        writer = csv.writer(csvf)
        writer.writerow(('有用数', '内容'))  # header matches the two columns actually written
        for page in range(0, 260, 20):
            url = '/subject/26662193/comments?start={}&limit=20&sort=new_score&status=P&percent_type='.format(page)
            r = get_html(url)
            parse_html(r, writer)
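Crawling "until it could not crawl any more" refers to Douban eventually refusing further comment pages. Instead of hard-coding range(0, 260, 20), the paging can stop as soon as a page comes back empty or a request is rejected. A small sketch reusing get_html and parse_html from the script above (the crawl_all helper is my own addition, not part of the original post):

import requests

def crawl_all(writer, step=20):
    base = '/subject/26662193/comments?start={}&limit=20&sort=new_score&status=P&percent_type='
    start = 0
    while True:
        try:
            page = get_html(base.format(start))      # reuses get_html defined above
        except requests.HTTPError:
            break                                    # Douban started refusing requests
        if not page.xpath("//*[@class='comment-item']"):
            break                                    # no comments left on this page
        parse_html(page, writer)                     # reuses parse_html defined above
        start += step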

(2) Result screenshot:

Data Analysis

(1) Word segmentation with jieba and plotting with matplotlib

Code:

# encoding=utf-8
import matplotlib.pyplot as plt
import numpy as np
import jieba
from PIL import Image
from wordcloud import WordCloud

# Read the raw text of the scraped comments
text_from_file_with_apath = open('douban.txt', 'rb').read()

# Segment with jieba and join the tokens with spaces
wordlist_after_jieba = jieba.cut(text_from_file_with_apath, cut_all=True)
wl_space_split = " ".join(wordlist_after_jieba)

# Generate the word cloud from the segmented text
# my_wordcloud = WordCloud().generate(wl_space_split)
font = r'C:\Windows\Fonts\simfang.ttf'          # a font with Chinese glyphs, see pitfall 1)
mask = np.array(Image.open('test_ciyun.jpg'))   # shape image, see pitfall 2)
wc = WordCloud(mask=mask, max_words=3000, collocations=False, font_path=font,
               width=5800, height=2400, margin=10,
               background_color='black').generate(wl_space_split)
default_colors = wc.to_array()  # not used further

plt.title("QR 3")
plt.imshow(wc)
plt.axis("off")
plt.show()
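plt.show() only displays the picture. If you also want the rendered cloud on disk, the wordcloud package can write it out directly; a one-line follow-on to the script above (the output file name is my own placeholder):

wc.to_file('qianren3_wordcloud.png')  # render the cloud at full resolution and save it as a PNG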

(2) Pitfalls and fixes:

1) Getting Chinese characters to display correctly:

Set font = r'C:\Windows\Fonts\simfang.ttf' and pass it as font_path=font when constructing WordCloud.

2) The background (mask) image did not seem to take effect (just my impression); a sanity check follows this list:

mask = np.array(Image.open('test_ciyun.jpg'))

3) Fixing the character-encoding problem:

text_from_file_with_apath = open('douban.txt', 'rb').read()
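On pitfall 2): as far as I understand the wordcloud library (an assumption on my part, not something tested in the original post), the mask only shapes the cloud when the background pixels of the image are pure white (255); anything darker counts as drawable area, and once a mask is passed the width and height arguments are ignored. A quick sanity check along these lines (the 245 threshold is an arbitrary choice):

import numpy as np
from PIL import Image

# Inspect the mask the way wordcloud will see it: pure-white (255) pixels are
# "masked out", everything else is free to draw on. A light-grey background
# therefore leaves the whole canvas drawable and the cloud stays rectangular.
mask = np.array(Image.open('test_ciyun.jpg').convert('L'))
print('share of pure-white pixels:', (mask == 255).mean())

# If the background is only *near* white, snap it to 255 before passing it on:
mask[mask > 245] = 255   # then use WordCloud(mask=mask, ...) as in the script above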

Analysis: the highest-ranked words after segmentation (a rough count; a sketch of the counting follows the list)

1 电影 (movie)

2 没有 (not / don't have)

3 什么 (what)

4 你们 (you, plural)

5 就是 (just / exactly)

6 爱情 (love), 男人 (men), 女人 (women)

7 自己 (oneself), 分手 (break-up)
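The ranking above reads like a plain frequency count over the segmented comments. For reference, a minimal sketch of how such a list could be produced (this is not the author's code; the stop-word set is my own ad-hoc choice):

# -*- coding: utf-8 -*-
from collections import Counter
import jieba

# Count the most frequent words in the scraped comments.
text = open('douban.txt', 'rb').read().decode('utf-8', errors='ignore')
stopwords = {'的', '了', '是', '我', '你', '他', '她', '也', '都', '很', '和'}
words = [w for w in jieba.cut(text) if w.strip() and w not in stopwords]

for word, count in Counter(words).most_common(10):
    print(word, count)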

It's getting a bit late... off to bed...
