
Data acquisition
€250 (approx. $293)
- Posted:
- Proposals: 25
- Remote
- #4317390
- OPPORTUNITY
- Awarded
Description
Experience Level: Entry
We are a software company. For one of our projects we need to download
information from a website containing articles on medical topics.
The website contains approx. 10,000 HTML pages forming a paged listing of
articles in Czech. The listing contains the titles of the articles, each
title linking to a detail HTML page with the article text.
We need someone to produce wget and other scripts that download the titles
of all articles, parse the links from those titles, download the detail
pages of the articles, and distill the text shown on each page.
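
A minimal sketch of such a download pipeline, assuming a page-numbered
listing URL and an href pattern that are pure placeholders here (the real
links and markup are only provided to the selected candidate):

```python
#!/usr/bin/env python3
# Illustrative only: BASE, PAGES, and the href pattern are placeholders.
import re
import subprocess
import time
from pathlib import Path

BASE = "https://example.com/articles"   # hypothetical listing URL
PAGES = 10000                           # "approx. 10,000 listing pages"

Path("listing").mkdir(exist_ok=True)
Path("raw").mkdir(exist_ok=True)

def fetch(url: str, dest: Path) -> None:
    # Use wget, as the posting requests: -q quiet, -O output file.
    subprocess.run(["wget", "-q", "-O", str(dest), url], check=True)
    time.sleep(1)  # be polite; the site's actual rate limits are unknown

# 1. Download every listing page.
for page in range(1, PAGES + 1):
    fetch(f"{BASE}?page={page}", Path("listing") / f"page_{page:05d}.html")

# 2. Parse detail-page links out of the listing titles.
link_re = re.compile(r'href="(/article/[^"]+)"')  # pattern is an assumption
links = set()
for f in sorted(Path("listing").glob("*.html")):
    links.update(link_re.findall(f.read_text(encoding="utf-8")))

# 3. Download each detail page; this raw HTML is itself a deliverable.
for i, path in enumerate(sorted(links)):
    fetch(f"https://example.com{path}", Path("raw") / f"article_{i:05d}.html")
```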
The titles as well as the detail pages mostly share the same structure,
which allows the work to be automated. This does not hold in 100% of cases,
however: there may be several structure variants, so some attention is
needed to distill the correct information.
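
One hedged way to cope with several structure variants is to try an
ordered list of candidate selectors and flag pages where none matches
rather than guessing; every selector below is hypothetical until the real
pages are known:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Ordered candidate selectors for the article body. All hypothetical:
# the real variants can only be catalogued once the pages are available.
BODY_SELECTORS = ["div.article-body", "div#content article", "main .text"]

def extract_body(html: str):
    """Return the article-body node, or None for an unknown variant."""
    soup = BeautifulSoup(html, "html.parser")
    for selector in BODY_SELECTORS:
        node = soup.select_one(selector)
        if node is not None:
            return node
    return None  # unknown structure: log it for manual review, don't guess
```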
The result of this work will be a set of static HTML files. You can view
this structure at
https://fomenot.com/z/dwld24/main.html
That is, the result will contain the contents of each article separated
into paragraphs of normal text and captions (nothing else: no images or
other text). We only want the main text of the article that is visible on
screen to the user, and no other text or HTML content.
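
Continuing the sketch above, the distilled static HTML could be produced
roughly like this; the tag names for paragraphs and captions are
assumptions about the source markup, and the output shape is only an
approximation of the structure at the main.html example:

```python
import html

def distill(body) -> str:
    """Turn the node from extract_body() into a minimal static HTML file
    holding only visible paragraph text and captions."""
    parts = []
    # Tag names are assumptions about the source markup; adjust as needed.
    for node in body.find_all(["p", "h2", "h3", "figcaption"]):
        text = node.get_text(" ", strip=True)
        if not text:
            continue  # e.g. a <p> that only wrapped an image
        css = "caption" if node.name == "figcaption" else "text"
        parts.append(f'<p class="{css}">{html.escape(text)}</p>')
    return "<html><body>\n" + "\n".join(parts) + "\n</body></html>"
```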
Another deliverable will be the raw HTML output for each of the detail pages.
To accept the output, we will run our own check of the result. If we find
errors, we will give examples of them and expect the vendor to fix all such
errors in the result, not just the cited examples. If there are only a few
errors we may not be able to find them, and that is acceptable; but any
errors we do find must be corrected.
We expect the raw HTML files to be 100% error free (for these we will not
give examples; we will simply require fixes). For the text-based results we
will give examples before requiring fixes.
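
Given that bar, a vendor would likely want a self-check pass over the raw
files before delivery. A minimal sketch, whose thresholds are guesses
rather than the client's actual acceptance criteria:

```python
from pathlib import Path
from bs4 import BeautifulSoup

problems = []
for f in sorted(Path("raw").glob("*.html")):
    data = f.read_text(encoding="utf-8", errors="replace")
    if len(data) < 500:  # size threshold is a guess; tune to real pages
        problems.append((f, "suspiciously small, possibly a failed download"))
    elif BeautifulSoup(data, "html.parser").find("title") is None:
        problems.append((f, "no <title>, possibly an error page"))

for f, why in problems:
    print(f"{f}: {why}")
```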
You can find an example of such a source page here:
https://www.idnes.cz/onadnes/zdravi/2
You can see a list of articles, each with a link leading to the detail
page, and a paging control that loads more articles from the next page.
This is NOT the page we need downloaded, only a similar one; the example is
here solely so that you understand the task.
Let us know whether you could do this and at what price. We will provide
the real links to the selected candidate.
Simona A.
- Rating: 100% (9)
- Projects completed: 6
- Freelancers worked with: 5
- Projects awarded: 70%
- Last project: 14 Jul 2025
- Czech Republic
Clarification Board

Could you please confirm if the website allows automated scraping or if there are specific rate limits or anti-bot measures to consider?
