NikolaiT/scrapeulous
Cloud Crawler

This repository contains crawler functions used by scrapeulous.com.

If you want to add your own crawler function to be used within the crawling infrastructure of scrapeulous, please contact us.

There are three different endpoints for the API:

  • /crawl - This endpoint allows you to get the HTML from any URL. You may use a browser or a plain HTTP request.
  • /serp - This endpoint allows you to scrape several different search engines such as Google, Bing or Amazon.
  • /custom - This endpoint allows you to specify your own crawler logic in a custom Puppeteer class.

For the complete documentation, please visit the API docs page.
