1. Key points of crawler design

If you want to crawl a website in batches, you need to build a crawler framework yourself. Before building it, you should think through several issues in advance: how to avoid IP bans, how to recognize image verification codes (CAPTCHAs), how to process the scraped data, and so on.

The most common defense against IP bans is to route requests through proxy IPs. Pairing the crawler with an HTTP proxy service such as ISPKEY, which responds quickly and runs self-operated server nodes across the country, goes a long way toward keeping a crawl running.
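As a minimal sketch of the proxy approach, the snippet below builds a request opener that picks a random proxy from a pool, so successive requests can leave from different IPs. The pool entries (host, port, credentials) are placeholders; substitute whatever your provider, e.g. ISPKEY, assigns to you.

```python
import random
import urllib.request

# Hypothetical proxy pool -- replace with the endpoints and credentials
# your proxy provider assigns to you.
PROXY_POOL = [
    "http://user:[email protected]:8000",
    "http://user:[email protected]:8000",
]

def proxied_opener():
    """Build an opener that routes traffic through a randomly chosen
    proxy from the pool, so each request can use a different exit IP."""
    proxy = random.choice(PROXY_POOL)
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return urllib.request.build_opener(handler)

# Usage (an actual network call, so not executed here):
#   html = proxied_opener().open("https://example.com", timeout=10).read()
```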

Relatively simple image verification codes can be handled with a recognizer you write yourself on top of the pytesseract library, although this only works for plain printed-text images. More complex schemes (mouse-track, slider, and animated verification codes) generally leave no option but a paid CAPTCHA-solving platform.
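For the simple case, a homemade recognizer is usually just light image cleanup followed by a pytesseract call. The sketch below assumes Pillow and pytesseract are installed (pytesseract also needs the tesseract binary on the system); the threshold value of 140 is an illustrative default you would tune per site.

```python
from PIL import Image

def preprocess(img, threshold=140):
    """Grayscale and binarize the image; this simple cleanup is often
    enough to make printed-text captchas readable by tesseract."""
    gray = img.convert("L")
    return gray.point(lambda p: 255 if p > threshold else 0)

def solve_captcha(path):
    """OCR a simple printed-text captcha image file."""
    import pytesseract  # requires the tesseract binary to be installed
    return pytesseract.image_to_string(preprocess(Image.open(path))).strip()
```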

As for data processing: if the data you retrieve turns out to be scrambled, there are two ways forward. Either work out the scrambling pattern and reverse it yourself, or execute the site's own source JavaScript through Python's execjs library (or another JS-execution library) to recover the data.
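Both routes can be illustrated briefly. Suppose you discover that a site simply reverses each field before sending it (a hypothetical pattern chosen for illustration): you can reimplement that transform in pure Python, or hand the site's own decode function to a JS runtime.

```python
def unscramble(s: str) -> str:
    """Pure-Python reimplementation of the identified JS transform
    (here: the hypothetical 'reverse the string' pattern)."""
    return s[::-1]

# Alternatively, run the site's own JS directly. This requires the
# PyExecJS package plus a JS runtime such as Node:
#
#   import execjs
#   ctx = execjs.compile(js_source)        # js_source: the site's decode fn
#   plain = ctx.call("decode", scrambled)  # call it by name with the data

print(unscramble("dlrow olleh"))  # -> hello world
```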


2. Distributed crawler solution

If you want to crawl data from a large site in batches, a good approach is to maintain four queues:

1. URL task queue - stores the URLs waiting to be crawled.

2. Original URL queue - stores URLs extracted from crawled pages but not yet processed; processing mainly means checking whether a URL needs to be crawled, whether it is a duplicate, and so on.

3. Original data queue - stores the crawled page data before any processing.

4. Second-hand data queue - stores the processed data that is awaiting storage.
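The four queues above can be sketched as follows. This is a single-machine sketch using in-process queues; in a genuinely distributed deployment they would live in shared infrastructure (Redis lists or a message broker are common choices, though the source does not prescribe one) so that workers on different machines can reach them.

```python
from multiprocessing import Queue

# The four queues of the distributed crawler design.
url_task_queue = Queue()          # 1. URLs waiting to be crawled
original_url_queue = Queue()      # 2. extracted URLs, not yet checked
original_data_queue = Queue()     # 3. fetched pages, unprocessed
second_hand_data_queue = Queue()  # 4. processed records awaiting storage

# Seed the crawl with a starting URL.
url_task_queue.put("https://example.com/")
```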


Four processes monitor these queues and execute their tasks:

1. Crawling process - listens on the URL task queue, fetches page data, and pushes the raw results into the original data queue.

2. URL-processing process - listens on the original URL queue and filters out malformed URLs and URLs that have already been crawled.

3. Data-extraction process - listens on the original data queue and extracts the key data from it, namely new URLs and the target data.

4. Data-storage process - cleans up the second-hand data and stores it in MongoDB.
