
Python crawler: fetching web page content with POST

Posted: 2024-05-28 09:56:42


First, a quick introduction to GET and POST in HTTP.

This post shows how to write a crawler that fetches page content using POST.

Why use POST:

GET is not secure: the data travels in the request URL, and many servers, proxy servers, and user agents log request URLs, so private information can end up visible to third parties. For small amounts of data (roughly 1 KB, a common practical limit on GET) that need no protection, GET access to the web server is fine; for larger payloads, or data that must be protected, use POST, which packages the data into the request body before sending it to the web server.
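To illustrate the difference, here is a minimal sketch with the requests library. The endpoint https://example.com/login is a placeholder, not a real service; the requests are prepared but never sent, which is enough to show where the data ends up:

```python
import requests

# Hypothetical login data; the endpoint below is a placeholder.
params = {"user": "alice", "token": "secret"}

get_req = requests.Request("GET", "https://example.com/login", params=params).prepare()
post_req = requests.Request("POST", "https://example.com/login", data=params).prepare()

print(get_req.url)    # data is exposed in the URL (and in any server or proxy logs)
print(post_req.url)   # the URL stays clean
print(post_req.body)  # the data travels in the request body instead
```

With GET, `token=secret` appears in the URL itself; with POST it only appears in the body.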

Writing the crawler:

Once you understand why POST is used, it is easy to guess which pages will use it. Below is a registration page from China Railway 12306. It collects ID numbers, phone numbers, and real names, all highly sensitive personal data, so it cannot use GET and must use POST.

Open the page, press F12, and click the Network tab (the highlighted dark area in the screenshot). There you can clearly see the URL and the Request Method, which confirms that this page submits its data to the server via POST.

url = "/otn/ip/sec"

Copy everything under Form Data and write it out as a dictionary in Jupyter:

# -*- coding: utf-8 -*-
import requests

payload = {
    'sig': '_6ad953cbe26b56bd9e4004ba081eef91e0d6e526a8f6bbeb1c84f7793946edc2bc4639a912bacef125f91dcf7b69a96f149bfa98ff0b2b9e550a64f0e33ec087eefb32133677856e6771555a5d60be012b5a9f23cd65fe8bbfbfc55872132578f449a3f15e7e92fc729273c0dea849249ce84343fd7183e7715ada090dd3dcc4026eae4920a2fe4d4c9bb77bdc285795cb2cb863c9835cab5be140482299f3f1d323279f801b550b',
    'data': '5G835R+BLir6khLbmwPRYJ6fyZus4Wv7dJy+ajcBs7EFRs35MULC7uZaqv01g3jB4mNxMseIkNs1Cf2VNBqVhy89ttOt91V11EL74Lrl1686N1qpyLoF/qMqI6ysE6Z1nT7fSNcjK3s8O3Eaw6bDcnnalNpdycwuFpTUOXArGjGQvJnmdvvMMZtsr2td2rY85RfHk1CWg5Z/AWIi9pYMwP4uhr06d6sW2MHhnohePirxeVT9qDqT97Bt0Knk1oHlKMutgCudBcdZNsb3G34m/Tsayy9dWZXnX6l9D3X4AZgTL+876E5OX8iwsKCEMj3gw/LUJzh4XsYrBknaqY7WGv1cbNlMbDkChvUIlZZePp0/1kdnefUrSTIT10E5cjhthgpHFAXaE4MKtdfgorI2ATeoSBfvbBw9r1TRdwiheMiRLHRU1dntF2fgQpGYBPBTmpdqiUI2Qi/hlxhXNw/qh2ZmlqZfXTMWLLNlNtKQ9qPAG/rId7HcX9NrWqzD2FFcgPcpqZQzqwX6vcI7nswrD/uV+Sh1XOBW0V2bHNsftpYB1mDhqiZO/2zeWATAFkGZXAIDRAEYrEuRuE6NfH69oGJthO9bp1X6nLO2OOp8xXabnGP7RpGlHaSR83IQLx3hyFkPZf3VTgs05uzivCo6jDuaG9krKkMauQL1VQT1Yo/JzFn1zSGeOCmdl/q5CbUNsyRFPozd1wFt5fWFaTE9IciEGJTrrnpMso3lUZ3ybGJ9ZGp+qRl0tsf6rNbCgBmj24tfxmeT8Ue7EQOkvWPJRdCrtGElVlaESNOQQ7WNit3YX8GwVqFh9H1NPruhZu27c+9TXVHYkaFa0iQzYKXsYHQ+1zNi9bgtpArLsDRvgqJ2lmz/PIJCcE59WfbHM4Su4m3/vuKlvsrtsRyZ7WpyYvl2n1chdR9wUdQrnGZtvLWnPX3QZ7yF311Z0rz/UG53N4HIooEbcVWVhbvmaZRg7ibVWixcuaW6dpmbq1TtlKRa6F+iv36JvcKSYM7kx41OkrjFdhVNoG1eXdTWewhi1U5R4z9szcOmpSs5QtEM9270wPLlW0HNV3xjJanZQvwq/NeeNLA/Ub3KyIs2aE57z9YPzQn7NHxdtgwTBLIDUDJxD75O52FqX05R4z9szcOm/0gzgb746Jxw7wgNs6RVCdS9u3pUS+Bkc0tIs5oCa+XjUO/FH/1OC3saXSRY5vmqGwENQIPtYWvH/saA5WddJiUIo/k/TP6Ih8oBfFFoHQFqPYLS2Gjc6Vy9HlKGKzIP1j3oPUDSOEyxtwVENILqEicT2tzXAVHw2eyo+NUZw5gmQ3rw7nlEl9fHAU5hV4BzSsEFuYgd3qvEf1Nt6IaDlB0SPkZiyLyEyaFmkrBXlX2bnN18PmtiTCfhCtSguHwtfjs+uAMRPEkt/jNpnExSkocK+PYcdCULQ+zzsrnJgMpt5fWFaTE9IeHeJVYHDfQkLd+8IRywJNKzGW7OPwDr04VCtd4S1ctWaK4GaMYc3X5orgZoxhzdfkBP6iIgp+wM24tfxmeT8UePuHUMGg3AY6y79YhjC7YVx9ufzitQEHfH25/OK1AQd8fbn84rUBB35hZmXgVkRWoFKfdVV0hJ7pEUsQiuTNRksjfM9JIJLQ0pnZf6uQm1DQl6U9Kpi6HYqwZ17NcXXQY32lR7fC1/6+aQuAWEpTeQe9iF76l7sZikeOtzqLtAioyoZ3reYcPaXf8exuxZTg23bVaIM5nUjqsGdezXF10GW2FL3oifAkGzGW7OPwDr035/boqpwaLc5RF2PBG3YjpjVUsxnomNIGHhp72r5s3ChoyZ2U0EdGSQBxdV/esUfHOuVJzyevY5JHSKD7nLsIx8drCXiQBsE2FcJ5kRyPAPjn4+RecLhpSsZVGKALlTBLUTKCj0zEDBWTqoMIzPOz7IWQ9l/dVOC7ul6i3KLzfHcO8IDbOkVQnKkodJNT7/FaVuUuDn5xhM4j2L62/J4I4JyQ4TW2ccLYVeVEQXccG3Zphjn6ONgfpZpDeALODIM0GiGDniKbNNP8VjE/DU0kd8YyWp2UL8KuNQ78Uf/U4LG7Kwt3c+5HlFYOlxyJBzBHDvCA2zpFUJwVYXdPtNge/jUO/FH/1OCy9S2WT7EEtPoWGO176PI7azGW7OPwDr0+uMKJUnHfMVnjnFPgucwtXKyIs2aE57z8pvZPRj6OkXP8VjE/DU0kfXMhQxA9b/XX1NPruhZu271U35PL5iQWL6Y5NbQWx/1i9LA3ZT7fS8j2W6Utzz6IV8YyWp2UL8KuNQ78Uf/U4LR3xBArXrZe5ZpDeALODIMzSOXKsuq3Ja7dNqlVE6mO+5iiaTXRrckqSea2VQmEyH0Dea7Yl7ivQPvXjVdOeyB1rNDVptj4OAj50zp/yc4Ab2nDiSicYuiWuQbTG66YdUEo5m4x7UWUNBv3Rn0D2VHpXjxpxwkprXr+cItRcztC0vh2D8Ng5f+wdyASCTvurRGW/FMCVhRTTI+xkILt7xf+hroL+iNBdz25PNGDV10gkZIbVQ9E8T3h7DVfzupP4JkTcpFwzQbOZqcrbsZ7iajSXtOIAMANrerg78SPDK/iQ=',
    'action': 'r'
}

url = "/otn/ip/sec"
res = requests.post(url, data=payload)
print(res.text)

Written this way, the code does retrieve content from the server, but the output is garbled, so we need to fix the encoding.

import urllib.request
import chardet

url = "/otn/ip/sec"  # relative path as given; a full URL is needed to actually run this
raw_data = urllib.request.urlopen(url).read()
charset = chardet.detect(raw_data)  # e.g. {'confidence': 0.99, 'encoding': 'utf-8'}
encoding = charset['encoding']
res.encoding = encoding

The principle: detect the page's encoding from the raw response bytes, then assign it to res.encoding, i.e. the content we fetched, so that res.text is decoded correctly and the garbled output disappears.

The content fetched via POST: (screenshot in the original post)
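The detection step can be seen in isolation. In this sketch, chardet is fed a UTF-8 encoded Chinese string as a stand-in for a real response body:

```python
import chardet

# Stand-in for raw response bytes; repeated so chardet has enough data to work with.
raw = ("你好,世界!这是一个编码检测的例子。" * 10).encode("utf-8")
guess = chardet.detect(raw)  # e.g. {'encoding': 'utf-8', 'confidence': ...}
print(guess["encoding"])
```

chardet returns a dictionary with the guessed encoding and a confidence score; assigning that encoding to `res.encoding` tells requests how to decode `res.text`.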

Note: in Python 3, urllib2 has been replaced by urllib.request.
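For reference, the Python 3 equivalent of the old urllib2 calls looks like this; the URL is a placeholder and the request object is only built, not sent:

```python
# In Python 3, urllib2's functionality lives under urllib.request.
from urllib.request import Request, urlopen  # Python 2: import urllib2

# Build a request object; urlopen(req).read() would fetch the raw bytes.
req = Request("https://example.com", headers={"User-Agent": "Mozilla/5.0"})
print(req.full_url)      # https://example.com
print(req.get_method())  # GET by default
```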

The "# -*- coding: utf-8 -*-" declaration is essential; without it the output can still come out garbled.

The full code:

# -*- coding: utf-8 -*-
import chardet
import requests

payload = {
    'sig': '_6ad953cbe26b56bd9e4004ba081eef91e0d6e526a8f6bbeb1c84f7793946edc2bc4639a912bacef125f91dcf7b69a96f149bfa98ff0b2b9e550a64f0e33ec087eefb32133677856e6771555a5d60be012b5a9f23cd65fe8bbfbfc55872132578f449a3f15e7e92fc729273c0dea849249ce84343fd7183e7715ada090dd3dcc4026eae4920a2fe4d4c9bb77bdc285795cb2cb863c9835cab5be140482299f3f1d323279f801b550b',
    'data': '5G835R+BLir6khLbmwPRYJ6fyZus4Wv7dJy+ajcBs7EFRs35MULC7uZaqv01g3jB4mNxMseIkNs1Cf2VNBqVhy89ttOt91V11EL74Lrl1686N1qpyLoF/qMqI6ysE6Z1nT7fSNcjK3s8O3Eaw6bDcnnalNpdycwuFpTUOXArGjGQvJnmdvvMMZtsr2td2rY85RfHk1CWg5Z/AWIi9pYMwP4uhr06d6sW2MHhnohePirxeVT9qDqT97Bt0Knk1oHlKMutgCudBcdZNsb3G34m/Tsayy9dWZXnX6l9D3X4AZgTL+876E5OX8iwsKCEMj3gw/LUJzh4XsYrBknaqY7WGv1cbNlMbDkChvUIlZZePp0/1kdnefUrSTIT10E5cjhthgpHFAXaE4MKtdfgorI2ATeoSBfvbBw9r1TRdwiheMiRLHRU1dntF2fgQpGYBPBTmpdqiUI2Qi/hlxhXNw/qh2ZmlqZfXTMWLLNlNtKQ9qPAG/rId7HcX9NrWqzD2FFcgPcpqZQzqwX6vcI7nswrD/uV+Sh1XOBW0V2bHNsftpYB1mDhqiZO/2zeWATAFkGZXAIDRAEYrEuRuE6NfH69oGJthO9bp1X6nLO2OOp8xXabnGP7RpGlHaSR83IQLx3hyFkPZf3VTgs05uzivCo6jDuaG9krKkMauQL1VQT1Yo/JzFn1zSGeOCmdl/q5CbUNsyRFPozd1wFt5fWFaTE9IciEGJTrrnpMso3lUZ3ybGJ9ZGp+qRl0tsf6rNbCgBmj24tfxmeT8Ue7EQOkvWPJRdCrtGElVlaESNOQQ7WNit3YX8GwVqFh9H1NPruhZu27c+9TXVHYkaFa0iQzYKXsYHQ+1zNi9bgtpArLsDRvgqJ2lmz/PIJCcE59WfbHM4Su4m3/vuKlvsrtsRyZ7WpyYvl2n1chdR9wUdQrnGZtvLWnPX3QZ7yF311Z0rz/UG53N4HIooEbcVWVhbvmaZRg7ibVWixcuaW6dpmbq1TtlKRa6F+iv36JvcKSYM7kx41OkrjFdhVNoG1eXdTWewhi1U5R4z9szcOmpSs5QtEM9270wPLlW0HNV3xjJanZQvwq/NeeNLA/Ub3KyIs2aE57z9YPzQn7NHxdtgwTBLIDUDJxD75O52FqX05R4z9szcOm/0gzgb746Jxw7wgNs6RVCdS9u3pUS+Bkc0tIs5oCa+XjUO/FH/1OC3saXSRY5vmqGwENQIPtYWvH/saA5WddJiUIo/k/TP6Ih8oBfFFoHQFqPYLS2Gjc6Vy9HlKGKzIP1j3oPUDSOEyxtwVENILqEicT2tzXAVHw2eyo+NUZw5gmQ3rw7nlEl9fHAU5hV4BzSsEFuYgd3qvEf1Nt6IaDlB0SPkZiyLyEyaFmkrBXlX2bnN18PmtiTCfhCtSguHwtfjs+uAMRPEkt/jNpnExSkocK+PYcdCULQ+zzsrnJgMpt5fWFaTE9IeHeJVYHDfQkLd+8IRywJNKzGW7OPwDr04VCtd4S1ctWaK4GaMYc3X5orgZoxhzdfkBP6iIgp+wM24tfxmeT8UePuHUMGg3AY6y79YhjC7YVx9ufzitQEHfH25/OK1AQd8fbn84rUBB35hZmXgVkRWoFKfdVV0hJ7pEUsQiuTNRksjfM9JIJLQ0pnZf6uQm1DQl6U9Kpi6HYqwZ17NcXXQY32lR7fC1/6+aQuAWEpTeQe9iF76l7sZikeOtzqLtAioyoZ3reYcPaXf8exuxZTg23bVaIM5nUjqsGdezXF10GW2FL3oifAkGzGW7OPwDr035/boqpwaLc5RF2PBG3YjpjVUsxnomNIGHhp72r5s3ChoyZ2U0EdGSQBxdV/esUfHOuVJzyevY5JHSKD7nLsIx8drCXiQBsE2FcJ5kRyPAPjn4+RecLhpSsZVGKALlTBLUTKCj0zEDBWTqoMIzPOz7IWQ9l/dVOC7ul6i3KLzfHcO8IDbOkVQnKkodJNT7/FaVuUuDn5xhM4j2L62/J4I4JyQ4TW2ccLYVeVEQXccG3Zphjn6ONgfpZpDeALODIM0GiGDniKbNNP8VjE/DU0kd8YyWp2UL8KuNQ78Uf/U4LG7Kwt3c+5HlFYOlxyJBzBHDvCA2zpFUJwVYXdPtNge/jUO/FH/1OCy9S2WT7EEtPoWGO176PI7azGW7OPwDr0+uMKJUnHfMVnjnFPgucwtXKyIs2aE57z8pvZPRj6OkXP8VjE/DU0kfXMhQxA9b/XX1NPruhZu271U35PL5iQWL6Y5NbQWx/1i9LA3ZT7fS8j2W6Utzz6IV8YyWp2UL8KuNQ78Uf/U4LR3xBArXrZe5ZpDeALODIMzSOXKsuq3Ja7dNqlVE6mO+5iiaTXRrckqSea2VQmEyH0Dea7Yl7ivQPvXjVdOeyB1rNDVptj4OAj50zp/yc4Ab2nDiSicYuiWuQbTG66YdUEo5m4x7UWUNBv3Rn0D2VHpXjxpxwkprXr+cItRcztC0vh2D8Ng5f+wdyASCTvurRGW/FMCVhRTTI+xkILt7xf+hroL+iNBdz25PNGDV10gkZIbVQ9E8T3h7DVfzupP4JkTcpFwzQbOZqcrbsZ7iajSXtOIAMANrerg78SPDK/iQ=',
    'action': 'r'
}

url = "/otn/ip/sec"
res = requests.post(url, data=payload)

# Detect the encoding from the bytes we already received,
# rather than fetching the page a second time with urllib.
charset = chardet.detect(res.content)  # e.g. {'confidence': 0.99, 'encoding': 'utf-8'}
encoding = charset['encoding']
res.encoding = encoding
print(res.text)
