Thread-pool multi-threaded crawlers implemented in PHP and Python: worked examples

Date: 2023-10-24 18:38:39

A multi-threaded crawler fetches pages concurrently, which noticeably improves crawling throughput compared with fetching them one at a time. Below are thread-pool multi-threaded crawler examples in both PHP and Python; the code is as follows.

PHP example (using the pthreads extension):

<?php
// Minimal Connect worker (its body was lost in the original paste): each pool
// thread runs inside a Connect, which lazily creates one cURL handle per
// thread (static properties are thread-local under pthreads) and hands it out
// via getConnection(), so Query jobs on the same worker reuse the connection.
class Connect extends Worker
{
    public function getConnection()
    {
        if (!self::$ch) {
            self::$ch = curl_init();
            // return the page body from curl_exec() instead of printing it
            curl_setopt(self::$ch, CURLOPT_RETURNTRANSFER, 1);
        }
        return self::$ch;
    }

    protected static $ch;
}

// Query is one crawl job; the pool hands it to a Connect worker to run.
class Query extends Threaded
{
    public function __construct($url)
    {
        $this->url = $url;
    }

    public function run()
    {
        $ch = $this->worker->getConnection();
        curl_setopt($ch, CURLOPT_URL, $this->url);
        $page  = curl_exec($ch);
        $info  = curl_getinfo($ch);
        $error = curl_error($ch);
        $this->deal_data($this->url, $page, $info, $error);
        $this->result = $page;
    }

    function deal_data($url, $page, $info, $error)
    {
        $parts = explode(".", $url);
        $id = $parts[1];
        if ($info["http_code"] != 200) {
            $this->show_msg($id, $error);
        } else {
            $this->show_msg($id, "OK");
        }
    }

    function show_msg($id, $msg)
    {
        echo $id . "\t$msg\n";
    }

    public function getResult()
    {
        return $this->result;
    }

    protected $url;
    protected $result;
}

function check_urls_multi_pthreads()
{
    global $check_urls; // the URLs to crawl
    $check_urls = array(
        'http://www.xx.com' => "xx site", // placeholder entry
    );
    $pool = new Pool(10, "Connect", array()); // a pool of 10 worker threads
    foreach ($check_urls as $url => $name) {
        $pool->submit(new Query($url));
    }
    $pool->shutdown();
}

check_urls_multi_pthreads();

Python multi-threading example (one thread per id, without a pool):

from threading import Thread

def handle(sid):
    # fetch and process the data for this id here
    pass

class MyThread(Thread):
    """Worker thread that runs handle() for a single id."""
    def __init__(self, sid):
        Thread.__init__(self)
        self.sid = sid

    def run(self):
        handle(self.sid)

threads = []
for i in range(1, 11):
    t = MyThread(i)
    threads.append(t)
    t.start()

for t in threads:
    t.join()
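
As an aside (not part of the original article), the same "check a fixed list of URLs with a bounded number of threads" idea can be written with Python's standard-library concurrent.futures instead of hand-rolling Thread subclasses. This is a minimal sketch; check_urls, check_url and the ten-second timeout are placeholder names and values chosen here for illustration:

# Sketch only: standard-library ThreadPoolExecutor doing what the PHP pool
# above does. check_urls / check_url are placeholder names, not from the article.
from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.request import urlopen
from urllib.error import URLError

check_urls = ["http://www.example.com/", "http://www.example.org/"]  # placeholder list

def check_url(url):
    """Fetch one URL and report whether it answered with HTTP 200."""
    try:
        with urlopen(url, timeout=10) as resp:
            return url, "OK" if resp.getcode() == 200 else str(resp.getcode())
    except URLError as exc:
        return url, str(exc.reason)

# 10 worker threads, mirroring new Pool(10, "Connect", array()) above.
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(check_url, u) for u in check_urls]
    for fut in as_completed(futures):
        url, msg = fut.result()
        print(url, "\t", msg)

As with the pthreads Pool, at most ten requests are in flight at any moment, however long the URL list is.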

Python thread-pool crawler:

from queue import Queue
from threading import Thread, Lock
import urllib.parse
import socket
import re
import time

seen_urls = set(['/'])
lock = Lock()


class Fetcher(Thread):
    """Worker thread: pulls URLs from the shared queue, fetches them over a
    raw socket, extracts same-host links and feeds new ones back into the queue."""

    def __init__(self, tasks):
        Thread.__init__(self)
        self.tasks = tasks
        self.daemon = True
        self.start()

    def run(self):
        while True:
            url = self.tasks.get()
            print(url)
            sock = socket.socket()
            sock.connect(('localhost', 3000))  # the site being crawled runs locally on port 3000
            get = 'GET {} HTTP/1.0\r\nHost: localhost\r\n\r\n'.format(url)
            sock.send(get.encode('ascii'))
            response = b''
            chunk = sock.recv(4096)
            while chunk:
                response += chunk
                chunk = sock.recv(4096)

            links = self.parse_links(url, response)

            lock.acquire()
            for link in links.difference(seen_urls):
                self.tasks.put(link)
            seen_urls.update(links)
            lock.release()

            self.tasks.task_done()

    def parse_links(self, fetched_url, response):
        if not response:
            print('error: {}'.format(fetched_url))
            return set()
        if not self._is_html(response):
            return set()
        urls = set(re.findall(r'''(?i)href=["']?([^\s"'<>]+)''',
                              self.body(response)))

        links = set()
        for url in urls:
            normalized = urllib.parse.urljoin(fetched_url, url)
            parts = urllib.parse.urlparse(normalized)
            if parts.scheme not in ('', 'http', 'https'):
                continue
            host = parts.hostname
            if host and host.lower() not in ('localhost',):
                continue  # stay on the local site
            defragmented, frag = urllib.parse.urldefrag(parts.path)
            links.add(defragmented)

        return links

    def body(self, response):
        body = response.split(b'\r\n\r\n', 1)[1]
        return body.decode('utf-8')

    def _is_html(self, response):
        head, body = response.split(b'\r\n\r\n', 1)
        headers = dict(h.split(': ', 1) for h in head.decode().split('\r\n')[1:])
        return headers.get('Content-Type', '').startswith('text/html')


class ThreadPool:
    """A fixed pool of Fetcher threads draining one shared task queue."""

    def __init__(self, num_threads):
        self.tasks = Queue()
        for _ in range(num_threads):
            Fetcher(self.tasks)

    def add_task(self, url):
        self.tasks.put(url)

    def wait_completion(self):
        self.tasks.join()


if __name__ == '__main__':
    start = time.time()
    pool = ThreadPool(4)
    pool.add_task("/")
    pool.wait_completion()
    print('{} URLs fetched in {:.1f} seconds'.format(
        len(seen_urls), time.time() - start))
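
A note on how the crawler above knows it is finished: the daemon Fetcher threads are never joined; instead the Queue itself tracks outstanding work, and wait_completion() simply calls tasks.join(), which returns once task_done() has been called for every URL that was put(). Here is a minimal illustration of that idiom, with a placeholder worker body invented for the example:

# Illustration of the Queue completion idiom used by ThreadPool above
# (the worker body is a placeholder, not part of the original crawler).
from queue import Queue
from threading import Thread

q = Queue()

def worker():
    while True:
        item = q.get()
        print('processing', item)  # stand-in for real fetch/parse work
        q.task_done()              # mark this queue item as finished

for _ in range(4):                 # four daemon workers, like ThreadPool(4)
    Thread(target=worker, daemon=True).start()

for item in range(10):
    q.put(item)

q.join()                           # returns once every put() item is task_done()
print('all tasks finished')

Because the workers are daemon threads, the process can exit as soon as join() returns even though they are still blocked in q.get(); the crawler relies on exactly the same behaviour.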

Summary: in both languages the pattern is the same. A fixed pool of worker threads pulls jobs from a shared structure (a pthreads Pool in PHP, a Queue in Python), so the crawler fetches many pages concurrently while keeping the number of simultaneous connections bounded.
