
Python Developer to Review, Optimize & Run Web Crawler
- $2.0k
- Proposals: 50
- Remote
- #4341054
- Expired

Description
Experience Level: Expert
We need a Python developer with expertise in web scraping, machine learning, and automation to review, optimize, and run an existing domain crawler. The goal is to improve its efficiency, accuracy, and reliability.
This project will be conducted in two stages:
Stage 1: Successfully Run the Existing Crawler
Your first task is to set up and successfully run the crawler to validate that you can:
- Install and configure all dependencies
- Execute the crawler on a dataset of domain names
- Generate valid output
- Troubleshoot any initial setup issues
Once this is successfully demonstrated, we will move to Stage 2.
Stage 2: Propose a Better Solution or Manage Ongoing Runs
Depending on your expertise and your analysis of the current crawler, you will either:
A) Propose & Implement Improvements (Rewrite or Optimize the Crawler)
- Identify inefficiencies and suggest a better architecture
- Improve performance & accuracy (e.g., async scraping, multiprocessing, better ML model)
- Ensure scalability if running on large datasets
- Refactor code for long-term maintainability
OR
B) Run & Maintain the Crawler Monthly
- Execute the crawler on a scheduled basis
- Monitor accuracy and adjust as needed
- Fix bugs & ensure stability
- Implement new features & improvements over time
- Provide regular reports on data quality and insights
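For illustration, the "async scraping" improvement mentioned under option A can be sketched with the standard library alone. The `fetch()` stub below stands in for a real `aiohttp` request, and `MAX_CONCURRENCY` is an assumed tuning knob, not a value from the existing crawler:

```python
import asyncio

MAX_CONCURRENCY = 10  # assumed limit; tune to the target sites' tolerance

async def fetch(domain: str) -> str:
    # Placeholder for a real aiohttp GET, e.g.:
    #   async with session.get(f"http://{domain}") as resp: ...
    await asyncio.sleep(0)  # simulate network I/O
    return f"<html>content of {domain}</html>"

async def crawl(domains: list[str]) -> dict[str, str]:
    # Bound concurrency with a semaphore, then fan out with gather().
    sem = asyncio.Semaphore(MAX_CONCURRENCY)

    async def bounded_fetch(domain: str) -> tuple[str, str]:
        async with sem:
            return domain, await fetch(domain)

    results = await asyncio.gather(*(bounded_fetch(d) for d in domains))
    return dict(results)

pages = asyncio.run(crawl(["example.com", "example.org"]))
```

The semaphore-plus-gather pattern is what typically replaces sequential `requests` calls when a crawler of this kind is made asynchronous.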
About the current crawler
The current crawler is built in Python and classifies domain names based on their content (e.g., parked, active, expired). It utilizes:
- Requests / Aiohttp (Web scraping)
- Selenium (Browser automation)
- BeautifulSoup (HTML parsing)
- Langdetect / NLP (Language detection & text classification)
- XGBoost (Machine learning classification)
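To make the pipeline concrete, here is a minimal sketch of the classification step: extract crude text features from a fetched page and map them to the labels the posting mentions (parked / active / expired). The real crawler feeds features like these to an XGBoost model; a transparent rule stands in for the model here, and the parking phrases are purely illustrative:

```python
import re

# Illustrative markers; the real model learns such signals from data.
PARKED_MARKERS = ("domain is for sale", "buy this domain", "parked free")

def extract_features(html: str) -> dict:
    text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
    return {
        "word_count": len(text.split()),
        "has_parking_phrase": any(m in text for m in PARKED_MARKERS),
    }

def classify(features: dict) -> str:
    # Rule-based stand-in for the XGBoost classifier.
    if features["has_parking_phrase"]:
        return "parked"
    if features["word_count"] < 5:
        return "expired"  # near-empty page: treat as expired/inactive
    return "active"

label = classify(extract_features("<h1>Buy this domain today!</h1>"))
```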
Ideal Candidate:
- Strong Python skills (async programming, OOP)
- Web scraping & automation expertise (requests, Selenium, BeautifulSoup)
- Machine learning experience (XGBoost, Scikit-learn, NLP)
- Familiarity with databases (SQL/NoSQL) for storing results
- Cloud/DevOps experience (if needed for large-scale deployment)
- Ability to work independently and ensure high-quality results
- High level of English
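On the database point, the posting only says SQL/NoSQL; assuming SQLite as the SQL backend, result storage could look like the sketch below. The schema and upsert are illustrative, chosen so that monthly re-runs overwrite stale classifications:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in a real deployment
conn.execute(
    """CREATE TABLE IF NOT EXISTS domain_results (
           domain     TEXT PRIMARY KEY,
           label      TEXT NOT NULL,  -- parked / active / expired
           checked_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)
# Upsert so a rerun replaces the previous classification for a domain.
conn.execute(
    "INSERT INTO domain_results (domain, label) VALUES (?, ?) "
    "ON CONFLICT(domain) DO UPDATE SET label = excluded.label, "
    "checked_at = CURRENT_TIMESTAMP",
    ("example.com", "active"),
)
conn.commit()
rows = conn.execute("SELECT domain, label FROM domain_results").fetchall()
```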
Project Timeline & Budget:
Deliverables expected within 2-4 weeks
Budget: Open to discussion based on experience & estimated work required.

Pat M.
- Projects completed: 31 (97% positive)
- Freelancers worked with: 16
- Projects awarded: 17 (47%)
- Last project: 2 May 2024
- Location: Australia