  1. Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper.

    Crawl4AI is the #1 trending open-source web crawler on GitHub. Your support keeps it independent, innovative, and free for the community — while giving you direct access to premium benefits.

  2. crawler · GitHub Topics · GitHub

    Oct 12, 2017 · A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically …
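    The "systematically browses" behavior described above is, at its core, a breadth-first traversal of the link graph. Below is a minimal, hedged sketch of that idea using only the Python standard library; the `fetch` callable is a hypothetical parameter (pass in your own HTTP-fetching function) so the traversal logic stays separate from networking concerns. Real crawlers like the ones listed here add robots.txt handling, politeness delays, and concurrency on top of this.

    ```python
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin


    class LinkExtractor(HTMLParser):
        """Collects href values from <a> tags in an HTML document."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)


    def crawl(start_url, fetch, max_pages=10):
        """Breadth-first crawl starting at start_url.

        fetch(url) should return the page's HTML as a string, or None
        on error. Returns the list of URLs visited, in crawl order.
        """
        seen = {start_url}
        queue = deque([start_url])
        visited = []
        while queue and len(visited) < max_pages:
            url = queue.popleft()
            html = fetch(url)
            if html is None:
                continue  # skip unreachable pages
            visited.append(url)
            parser = LinkExtractor()
            parser.feed(html)
            for href in parser.links:
                absolute = urljoin(url, href)  # resolve relative links
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return visited
    ```

    Injecting `fetch` also makes the traversal easy to test against in-memory pages, with no network access required.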

  3. GitHub - NanmiCoder/CrawlerTutorial: Crawler tutorials from beginner to advanced

    Crawler tutorials covering beginner, intermediate, and advanced topics. Contribute to NanmiCoder/CrawlerTutorial development by creating an account on GitHub.

  4. NanmiCoder/NewsCrawler: Multi-platform News & Content Crawler Suite - GitHub

    Multi-platform News & Content Crawler Suite. Supports mainstream platforms including WeChat Official Accounts, Twitter, Toutiao, NetEase News, Sohu News, Tencent News, Naver, Detik, Quora, and more. Supports crawling …

  5. Lightnovel Crawler - GitHub

    Lightnovel Crawler downloads web novels and similar fiction from many online reading sites and saves them as e-books so you can read them offline on a phone, tablet, or e-reader.

  6. GitHub - elastic/crawler

    Elastic Open Crawler is a lightweight, open code web crawler designed for discovering, extracting, and indexing web content directly into Elasticsearch. This CLI-driven tool streamlines web content …

  7. A next-generation crawling and spidering framework - GitHub

    Oct 17, 2019 · Katana is a fast crawler focused on execution in automation pipelines, offering both headless and non-headless crawling. Usage: ./katana [flags] Flags: INPUT: -u, -list string[] target url / …

  8. GitHub - dataabc/weibo-crawler: A Sina Weibo crawler; uses Python to crawl Sina Weibo …

    Jul 12, 2019 · A Sina Weibo crawler: uses Python to crawl Sina Weibo data and download Weibo images and videos. Contribute to dataabc/weibo-crawler development by creating an account ...

  9. GitHub - BruceDone/awesome-crawler: A collection of awesome web …

    A collection of awesome web crawlers, spiders, and resources in different languages.

  10. GitHub - fredwu/crawler: A high performance web crawler / scraper in ...

    A high-performance web crawler / scraper in Elixir, with worker pooling and rate limiting via OPQ.