AWS Projects
Looking for freelance AWS jobs and project work? PeoplePerHour has you covered.
AWS Expert
The AWS person must be able to perform the following tasks:

1. Supernal-AI-Hub-App and Account Settings Platform (ASP):
   - Hosted on existing Amazon EC2 instance.
   - Ensure granular control over the hosting environment.
   - Configure security groups, IAM roles, and security measures for sensitive data handling.
   - Implement monitoring and management tools like AWS CloudWatch for performance monitoring and alerts.
   - Establish seamless communication between EC2 and EKS environments through appropriate networking configurations (e.g., VPC peering, API Gateway, PrivateLink).

2. Launchpad App:
   - Utilize Elastic Load Balancing to distribute traffic across multiple EC2 instances.
   - Configure Auto Scaling groups for dynamic scaling based on traffic demands.
   - Implement health checks and alarms for fault tolerance and high availability.
   - Set up security measures such as security groups, IAM roles, and SSL certificates for secure communication.

3. Vendor Success Platform (VSP), Customer Success Platform (CSP), and Product Information Management (PIM) Application:
   - Move these three applications to a single Amazon EKS cluster.
   - Utilize Kubernetes for container orchestration, enabling features like self-healing, automated rollouts, scaling, etc.
   - Configure Kubernetes services for each application within the EKS cluster.
   - Implement proper resource management and scaling policies for efficient resource utilization.
   - Set up monitoring and logging solutions compatible with Kubernetes (e.g., Prometheus, Fluentd, Elasticsearch, Kibana).
   - Ensure seamless integration between EKS services and the EC2-hosted applications.
   - Implement security measures such as IAM roles for service accounts, Kubernetes RBAC, network policies, etc., to secure containerized applications.
   - Apply consistent security policies across EC2 and EKS environments, including encryption, access controls, and compliance measures.
   - Implement CI/CD pipelines for automated deployments and updates of containerized applications.
   - Configure DNS routing or service discovery mechanisms for accessing the PIM application from VSP and CSP.

4. Overall Considerations:
   - Implement unified monitoring and management tools compatible with both EC2 and EKS environments for comprehensive insights.
   - Apply consistent security and compliance measures across both environments using AWS services such as IAM, Security Groups, AWS Shield, etc.
   - Perform thorough testing and validation of the hybrid setup to ensure seamless operation and minimal disruption during migration.
   - Document the setup, configurations, and operational procedures for future reference and troubleshooting.
   - Continuously monitor and optimize resource usage, security posture, and performance of the hosted applications on AWS.
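As a minimal illustration of the health checks mentioned for the Launchpad App, a Python sketch of a load-balancer probe endpoint follows; the path name, check names, and response format are assumptions for illustration, not part of the brief:

```python
# Sketch of an ELB health-check handler: each dependency registers a
# zero-argument callable returning True when reachable. ELB marks the
# instance unhealthy on any non-200 response.
def healthz(checks):
    """Return (http_status, body) for a load-balancer health probe."""
    failed = [name for name, check in checks.items() if not check()]
    if failed:
        return 503, "unhealthy: " + ", ".join(sorted(failed))
    return 200, "ok"
```

An HTTP layer (the application's own framework, or stdlib `http.server`) would route `GET /healthz` to this function so Auto Scaling can replace instances that fail the probe.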
8 days ago · 14 proposals · Remote

PHP Developer with AWS Experience
In order to handle bounces and complaints, we recently implemented the following for our Amazon SES setup: a) a script for handling bounces and complaints; b) SQS/SNS, plus a PHP script that checks the SQS queue for a high bounce/complaint rate and, in that case, sends an email/text alert; c) a cron job for running the script. We have noticed bounces and complaints again. We now require a PHP developer, preferably with Laravel 7.x and AWS experience, to investigate why the above solution and the automatic email-block solution haven't worked, and what we can do to address the issue.
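For reference, the classification step such a script has to get right can be sketched as follows. This is a hedged illustration (in Python, though the project uses PHP) of parsing the SES notification JSON that SNS delivers to the SQS queue; the field names follow the documented SES notification format:

```python
import json

def recipients_to_block(raw_sqs_body):
    """Return e-mail addresses that should be blocked from further sends.

    raw_sqs_body is the SQS message body: an SNS envelope whose "Message"
    field is itself a JSON-encoded SES notification.
    """
    sns_envelope = json.loads(raw_sqs_body)
    notification = json.loads(sns_envelope["Message"])
    if notification.get("notificationType") == "Bounce":
        bounce = notification["bounce"]
        # Only permanent (hard) bounces warrant a block; transient
        # (soft) bounces may succeed on retry.
        if bounce.get("bounceType") == "Permanent":
            return [r["emailAddress"] for r in bounce["bouncedRecipients"]]
    elif notification.get("notificationType") == "Complaint":
        return [r["emailAddress"]
                for r in notification["complaint"]["complainedRecipients"]]
    return []
```

A common failure mode worth checking in a review like this is blocking on *all* bounces (or none), rather than only on permanent ones.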
16 days ago · 20 proposals · Remote

URGENT: I need an AWS expert
Hello. I am developing an e-commerce site on AWS, but it isn't working: the site shows "Error establishing a database connection". I want a developer with solid AWS experience.
22 days ago · 29 proposals · Remote

MySQL developer
I need a MySQL developer to assist with the development of a MySQL database. I am developing a database of New Zealand lawyers, containing public details about them including email address and date of admission to the profession; there are about 16,000 records. I am looking for someone to help me develop the database, so a good knowledge of MySQL is essential. I am using an AWS Lightsail database on the web and phpMyAdmin on a local machine. It is for a PHP website.
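As a sketch only, the schema might start like this; Python's bundled sqlite3 stands in here for the Lightsail MySQL instance so it can be tried locally, and every column name is an assumption drawn from the brief:

```python
import sqlite3

# Local stand-in for the Lightsail MySQL database: a lawyers table with
# the public details the brief mentions (name, email, admission date).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lawyers (
        id INTEGER PRIMARY KEY,
        full_name TEXT NOT NULL,
        email TEXT UNIQUE,           -- public contact address
        admission_date TEXT,         -- use a DATE column in MySQL
        firm TEXT
    )
""")
conn.execute(
    "INSERT INTO lawyers (full_name, email, admission_date) VALUES (?, ?, ?)",
    ("Jane Example", "jane@example.nz", "2010-03-01"),
)
row = conn.execute(
    "SELECT full_name FROM lawyers WHERE email = ?", ("jane@example.nz",)
).fetchone()
```

The UNIQUE constraint on email matters at this scale: it catches duplicate rows during the initial bulk load of ~16,000 records.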
17 days ago · 100 proposals · Remote

Run a Python project and tests inside Docker
Hi there. I have a Python project that uses Flask, AWS, and related services. I want to run the full setup inside a Docker image and run the unit tests. Project settings are provided using pyproject.toml and Poetry. You will connect to my MacBook using AnyDesk for this task.
24 days ago · 12 proposals · Remote

Amazon advertising SaaS tool
We are seeking a skilled freelancer to develop an Amazon advertising SaaS tool. The tool should be capable of connecting to Amazon accounts, managing rules for automated campaign execution, generating reports, and integrating with Amazon LWA (Login with Amazon). The preferred stack is Python and Django, with the server hosted on an Amazon EC2 instance.

Key functionalities include:
- Amazon account connection with LWA: the tool should establish a secure connection through Login with Amazon to access advertising data.
- Rule management: users should have the ability to create and manage various rules to automate campaign execution. Rules can include criteria such as budget allocation, ad scheduling, targeting, and more.
- Reporting: the tool should provide comprehensive reporting capabilities to sync Amazon advertising data. Reports should include metrics such as impressions, clicks, conversions, and ROI.

We are looking for a freelancer with experience in developing SaaS tools, specifically in the Amazon advertising ecosystem. The ideal candidate has a strong understanding of Python, Django, and AWS technologies, plus experience with Amazon LWA and familiarity with its API. The project duration is estimated at 30 days, and we expect the freelancer to work remotely during this time. We will provide access to the necessary AWS resources and clear instructions on the requirements and functionality of the tool. If you are a skilled freelancer with the required expertise and are interested in this project, please submit your proposal. We look forward to reviewing it and discussing the project further with you.
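To illustrate the LWA piece: before each batch of Ads API calls, a stored refresh token is exchanged for a short-lived access token at the Login with Amazon token endpoint. A hedged sketch of building that request body (the credential values are placeholders):

```python
from urllib.parse import urlencode

# Login with Amazon OAuth 2.0 token endpoint.
LWA_TOKEN_URL = "https://api.amazon.com/auth/o2/token"

def lwa_refresh_payload(refresh_token, client_id, client_secret):
    """Form-encoded body for exchanging a stored refresh token for an
    access token, as required before calling the Amazon Ads API."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })
```

The resulting string is POSTed to `LWA_TOKEN_URL` with a `Content-Type: application/x-www-form-urlencoded` header; the JSON response carries the `access_token` used on subsequent API calls.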
11 days ago · 13 proposals · Remote

opportunity
Create CRUD endpoints for our UI components in .NET Core Web API
We have an Admin Angular project, and it requires CRUD API endpoints to manage the data flow. You can start with our existing project and follow the same pattern. We additionally require you to create a CacheRepository pattern to manage the cache for CRUD operations. Our database is NoSQL, but the queries are written in SQL; these are quite simple (we follow a flat structure and don't require complex joins). I'll provide a JSON document and you'll need to create the endpoints to manage it.

1. Blog APIs
2. Home page APIs
   2.1. Trending products APIs
   2.2. Deal APIs
   2.3. Ads APIs
   2.4. Brands APIs
   2.5. Trending categories APIs
3. Category page APIs
   3.1. Ads
   3.2. Sub-categories
4. Sale APIs
5. AWS S3 bucket image management
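The CacheRepository idea can be sketched language-agnostically; below is a minimal Python illustration (the real project targets .NET Core) of the pattern: reads go through a cache, and writes invalidate the cached entry so the next read hits the underlying store:

```python
class CacheRepository:
    """Wrap a repository so reads are cached and writes invalidate."""

    def __init__(self, repo):
        self._repo = repo      # underlying store (NoSQL in the brief);
        self._cache = {}       # here any dict-like object works

    def get(self, key):
        # Serve from cache when possible; fall through to the store once.
        if key not in self._cache:
            self._cache[key] = self._repo.get(key)
        return self._cache[key]

    def put(self, key, value):
        self._repo[key] = value      # write-through to the store
        self._cache.pop(key, None)   # drop the now-stale cached entry
```

In the .NET version the same shape falls out of an `ICacheRepository` wrapping the data repository, with `IMemoryCache` (or a distributed cache) in place of the dict.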
6 days ago · 21 proposals · Remote

I need a website for job posting and accepting CVs
I am seeking a skilled developer to build a robust job posting and recruitment website. The site should be fully responsive, allowing job seekers to search listings and employers to post new opportunities. Key features include: employers can create a free profile to advertise openings, while more advanced paid plans unlock extra posting volume and application-management tools. Candidates should be able to upload CVs, apply directly to posts, and track application status. User accounts with login authentication need to be implemented for both employers and candidates, providing secure access to customized dashboards. Payment integration is important to facilitate transactions as employers upgrade plans or candidates pay for services. The site must be securely hosted on a reputable platform such as GoDaddy or AWS, with SSL certification ensuring the privacy and safety of user data. Developed using current best practices and coding standards, the finished product should efficiently connect talent with jobs through an intuitive experience for all users. Solid coding, testing, and ongoing maintenance support will also be appreciated to keep the site running optimally. Please provide portfolio links showcasing relevant work and outline timelines to complete this impactful project. I look forward to reviewing qualified proposals.
2 days ago · 22 proposals · Remote

Android & iOS app development and maintenance
We are seeking an experienced mobile app developer to help develop and maintain Android and iOS applications for our company. (Note: the candidate must have an EU tax ID, and his/her address must be in an EU country.) The ideal candidate will have a minimum of 2 years of professional experience building fully featured cross-platform apps using the native Android and iOS SDKs and/or the React Native framework. The company has sites in Germany, Bulgaria, and Greece. This is a remote position. The successful developer will be highly skilled in React Native and cross-platform best practices. They must be able to work remotely and independently, take initiative when needed, and keep code quality and documentation to a high standard. Experience managing the full mobile development lifecycle, from concept to deployment, is preferable. Most importantly, the candidate should have a collaborative spirit and a desire to help our growing company achieve its goals through high-quality, user-focused application development.
- Strong English communication skills, both written and verbal, are essential, as the developer will also act as a liaison between our technical and non-technical teams.
- Key responsibilities will include implementing new features, fixing bugs, optimizing performance, and overseeing the maintenance and scaling of our existing codebase.
- Experience with AWS serverless technologies like AWS Lambda would be beneficial, as some functionality relies on cloud integrations.
- Knowledge of Greek or German is preferable but not mandatory, as most technical discussions will be conducted in English.
a month ago · 44 proposals · Remote

Database Management System (DBMS) Migration Specialist
Job Description: We are seeking a skilled Database Management System (DBMS) Migration Specialist to assist in transitioning from Google Sheets to a more robust solution for managing production data. The successful candidate will be responsible for recommending and implementing the migration to a suitable DBMS, such as MySQL or PostgreSQL for structured data, or MongoDB for semi-structured data. Integration with data integration tools like Apache NiFi or Talend will be required to streamline the migration process. Additionally, expertise in utilizing BI tools like Tableau or Power BI for data analysis and visualization is essential to enhance decision-making processes. Consideration of cloud platforms such as AWS or GCP for storage solutions will be part of the project scope.

Skills Required:
- Proficiency in database management systems, including MySQL, PostgreSQL, and/or MongoDB.
- Experience with data integration tools such as Apache NiFi or Talend.
- Familiarity with BI tools like Tableau or Power BI for data analysis and visualization.
- Knowledge of cloud platforms such as AWS or GCP for storage solutions.
- Expertise in frontend frameworks like React.js or Angular, and backend technologies like Node.js or Django.

Experience Level: Intermediate to Expert
Budget: To be discussed based on experience and project scope.
Timeline: Flexible, with a preference for timely completion.
Communication Preferences: Regular communication via email, messaging platforms, or video calls as needed.

Additional Requirements:
- Ability to recommend and implement the most suitable DBMS solution based on project requirements.
- Experience in migrating data from Google Sheets to the chosen DBMS.
- Proficiency in setting up data integration pipelines and configuring BI tools for effective data analysis.
- Familiarity with cloud storage solutions and best practices for data management.
- Strong problem-solving skills and attention to detail.
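As an illustration of the core migration step: a Sheets export (CSV) can be bulk-loaded into a relational store. In this sketch sqlite3 stands in for MySQL/PostgreSQL, and the table and column names are invented for the example:

```python
import csv
import io
import sqlite3

def migrate_csv(csv_text, conn):
    """Load a CSV export (e.g. from Google Sheets) into a relational table.

    Returns the number of rows migrated. Schema is illustrative only.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    conn.execute("CREATE TABLE IF NOT EXISTS production (batch TEXT, qty INTEGER)")
    # Named placeholders let each CSV row (a dict) bind directly.
    conn.executemany(
        "INSERT INTO production (batch, qty) VALUES (:batch, :qty)", rows
    )
    return len(rows)
```

In a real migration the same shape applies, with the CSV pulled via the Sheets export URL or API and the connection pointed at the chosen DBMS (or fed through a NiFi/Talend pipeline instead).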
Portfolio or Sample Work: Interested candidates are encouraged to provide examples of previous DBMS migration projects or relevant experience.
Terms and Conditions: Terms and conditions will be discussed and agreed upon before the start of the project.
Call to Action: Interested candidates are invited to submit their proposals, including relevant experience and a proposed approach for the project.
Review Process: Proposals will be reviewed based on qualifications, experience, and proposed solutions. Shortlisted candidates may be contacted for further discussion.
18 days ago · 15 proposals · Remote
Past "AWS" Projects
opportunity
Front End and Back End Swift / Python Development for iOS App
Hi, I am looking to develop an app that connects to a Raspberry Pi device, sets up Wi-Fi on the device via Bluetooth, and then gets the Raspberry Pi to continuously scan for Bluetooth devices specified by the app. If one is nearby, the Raspberry Pi films a video and uploads it to the cloud, where it is analysed (I have written this code, both the filming and the uploading, but not the user identification). Video-analysis scripts are then run on the video (code I can write) and the data is loaded into a database. The app then reads and displays this data, hosted in an AWS database.

Milestone 1: Frontend
- Sign-in and sign-up, profile
- Device management and data display
- Notifications and the other functions
- Shared through TestFlight

Milestone 2: Backend & Database Interaction
- Authentication
- Database design
- Infrastructure as code on AWS

Milestone 3: Wi-Fi Configuration via App & Bluetooth Device Scanning
- Wi-Fi configuration
- Device scan and data display
- Video recording trigger

Milestone 4: Testing & Debugging

Milestone 5: App Store Upload

The UI is here and the flow is attached: https://www.figma.com/file/NcQG1IBWYS3TH0PzcmYFJ7/tinkle?type=design&node-id=0%3A1&mode=dev&t=EjRnGeqaonektEtr-1

I have three repos:
- tinkle_infrastructure: hosts the infrastructure as code plus the models that analyse the images and write to the database (currently not writing, and not all algorithms are complete). I imagine this is also where the APIs for the app will be hosted. It would be good to have some help deploying the models as infrastructure as code on a container.
- tinkle_device: hosts the code on the Raspberry Pi; any updates to this repo are automatically pushed to the device. This is set up and working and just needs the identification piece done.
- tinkle_app: the repo for the iOS app.

When submitting a proposal, please provide any relevant experience along with a breakdown of costs and a project plan. Thanks, John
Setting Up AWS Backend for Mobile App
I need someone who knows how to set up and configure an AWS backend and write the first business-logic code, so I have something to get started with. With the app, photos can be taken; these photos need to be stored in AWS together with some metadata, such as user ID, GPS location, and time. After doing research I've decided on:
• AWS
• Java for business logic (if appropriate, using AWS Lambda)
• IntelliJ as the IDE
• RDS PostgreSQL database with PostGIS
• S3 for photo storage
• AWS Amplify with Cognito for app user accounts
• REST API
I'm looking for someone who knows the above well and can assist me in setting things up and explain how it all works. I am an experienced software engineer but don't have much backend experience and need to get started. To be considered for this project, I need a good estimate of how much time this will take you. I'm open to fixed-price offers too. If anything is unclear and prevents a good estimate or fixed price, ask me for clarification or more details.
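As a first sketch of the business logic (in Python for brevity, though the brief specifies Java for Lambda), the metadata record stored alongside each S3 photo might be built like this; all field names are assumptions for illustration:

```python
def photo_record(user_id, lat, lon, taken_at, s3_key):
    """Build the metadata row stored for one uploaded photo.

    Validates GPS bounds before anything is persisted; in the target
    stack the location would become a PostGIS POINT(lon lat) in RDS.
    """
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("invalid GPS coordinates")
    return {
        "user_id": user_id,
        "location": {"lat": lat, "lon": lon},
        "taken_at": taken_at,   # ISO 8601 timestamp from the device
        "s3_key": s3_key,       # object key of the photo in the S3 bucket
    }
```

A Lambda handler behind the REST API would call something like this after the S3 upload, then INSERT the record into the PostGIS-enabled table.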
opportunity
Local Council Chatbot utilising Llama2 and dataset of PDF docs.
Full-stack developer with relevant experience in AWS services and LLM deployment.

Description: Develop an MVP that provides a chat interface allowing users to query a dataset of local council documents, which will variously include minutes and policy documents: a dataset that contains all information relating to the purpose, policies, news, information, and decision-making of that council. The dataset would contain approximately 100 PDF documents, and the chatbot would return meaningful and coherent answers to user prompts, while providing reference links to the documents that information in the response is taken from. The client acknowledges the current limitations of LLMs in returning responses from queries across multiple documents, especially given current token limits and processing-cost restrictions. A developer is sought who can leverage techniques to embed metadata in the text, allowing techniques such as RAG to extract snippets of data from multiple documents relating to the query and collate them into a response to the user, while adhering to token limits.

Objective: Develop an automated semantic text-analysis pipeline that processes and analyses textual data extracted from documents using Llama 2. This pipeline enriches text with metadata for deeper insights and enables semantic search capabilities through a user-friendly interface. This stage of the project is for an MVP system, leveraging AWS services such as Textract for text extraction, a text-categorising stage with a simple-to-use GUI, all-mpnet-base-v2 for embedding, and Postgres with a vector extension. This job posting is for the MVP stage only, but we must be mindful of the stage-two development and facilitate rapid and straightforward scalability in any stage-one MVP processes.
System Overview: The solution encompasses AWS services for storage and processing, a custom interface for metadata enrichment, all-mpnet-base-v2 for generating text embeddings, Postgres with a vector extension for efficient storage and retrieval of vectors, and a custom-built web interface for user interaction. RAG will be implemented with as broad a context as possible for the model across a large document set.

Phase 1: MVP Stage

1. Document Storage and Processing Trigger
- Tool: Amazon S3.
- Process: Upload documents (PDFs initially) to designated S3 buckets; documents will be renamed in accordance with a set naming convention, and details of the document entered into the database. This triggers the subsequent text-extraction process. For test purposes the uploads will be made manually; at later stages a web scraper will be added that automatically places PDF documents into the relevant S3 buckets.

2. Text Extraction
- Tool: AWS Textract.
- Process: Text is extracted from uploaded PDF documents and temporarily stored in S3 buckets to facilitate further processing.

3. Text Enrichment
The developer is to advise on the best method of adding labels/categories to the text via an easy-to-use interface. Labels are to be added at a granular level to allow the return of text snippets from within the chunks of data, together with relevant metadata. The purpose of this is to provide context to the LLM in formulating responses from a broad range of documents without exceeding the token limit.

4. Text Vectorization
- Embedding tool: all-mpnet-base-v2
- LLM: Amazon SageMaker (using LLaMA 2).
- Process: The text is processed with LLaMA 2 to generate vector embeddings, capturing semantic information for advanced analysis and search functionalities.

5. Vector Storage
- Tool: Postgres with a vector extension.
- Process: Text vectors are stored in the database, allowing for efficient management and retrieval of vectorized data for semantic searches.

6.
Front-end Web Application and Search Functionality
- Front-end technology: React.js.
- Key features:
  - Semantic search input and results display.
  - Email input field for collecting contact information for marketing purposes, forwarded to the client's email address.
  - Homepage containing descriptive marketing text.
  - 3 pages total: home page, interaction page, contact page, plus a pop-up with GDPR info. Graphics provided as template guidance.
- Back-end technology: Python with FastAPI.

7. Fine-Tuning
Allow for fine-tuning based on a series of questions and responses to be provided by the client, until coherent responses to queries are achieved.

Phase 2: Full Automation and Scaling
Beyond the scope of this job.

Notes: The developer is to provide guidance and feedback on the capabilities of the technologies and is free to provide their own suggestions. However, the functionality of the system in providing coherent responses based on text snippets drawn from a large dataset is both the challenge and the absolute requirement. Please only bid with your full and final price; placeholders will not be accepted. Completion within approximately two weeks. Please respond by explaining how you would handle the text enrichment.
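The retrieval step at the heart of the RAG requirement can be illustrated with a toy example. In production the vectors would come from all-mpnet-base-v2 and live in Postgres with a vector extension, but ranking snippets by cosine similarity works the same way; the tiny hand-made vectors below only stand in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_snippets(query_vec, snippets, k=2):
    """snippets: list of (text, vector); return the k closest texts.

    These texts (plus their document references) form the context
    passed to the LLM, keeping the prompt within the token limit.
    """
    ranked = sorted(snippets, key=lambda s: cosine(query_vec, s[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

A vector extension in Postgres performs exactly this ranking server-side (with an index rather than a full sort), so only the top-k snippet rows ever leave the database.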
opportunity
AI app to make crests from written descriptions
We require a custom-trained AI (GPT-powered or otherwise) to create family-crest designs based on text prompts from a Google Sheet/CSV. An example of some family crests is attached. The Google Sheet/CSV provided will contain thousands of prompts describing the appearance of each crest (examples attached). We need the output to always use the same template as the attached designs, i.e. the shape of the shield, text banners, helmet, and decorations should always stay the same shape and position. The elements that change on each design are the icons on the shield, their position on the shield, the colour of the shield and side decorations, and the icons above the helmet. There is a fixed range of colours (7) that can appear; we will provide hex references for these. There is also a fixed set of icons that typically appear in these crest designs, e.g. a lion, a sword, a castle (~300 in total). We can provide all icons that could appear on a crest as PNG files. Ideally the AI/app would create the icons on the fly based on some training data, but we can provide all icon files directly if needed so that output icons are consistent. The output design of each crest is to be a fixed-dimension, high-resolution PNG file. Ideally the model can also output some mock-ups of the crest on different fixed items, such as on a piece of paper or in a frame on a wall (always the same set of mock-ups). Output files are to be uploaded to AWS with a consistent naming convention. This can be a semi-manual process once files are generated, exported, and confirmed to meet quality standards, but we'd like this process to be as easy as possible.
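The prompt-to-template mapping could begin with something like the following sketch; the palette entries and icon names are placeholders for the client's fixed 7-colour set and ~300 PNG icons, and a real implementation would need fuzzier matching than plain word lookup:

```python
# Placeholder subsets of the client's fixed palette (7 hex colours)
# and icon library (~300 PNGs).
PALETTE = {"red": "#FF0000", "blue": "#0000FF", "gold": "#FFD700"}
ICONS = {"lion", "sword", "castle"}

def crest_spec(prompt):
    """Map one CSV prompt row to the fixed crest-template slots."""
    words = prompt.lower().split()
    colors = [PALETTE[w] for w in words if w in PALETTE]
    icons = [w for w in words if w in ICONS]
    return {
        "shield_color": colors[0] if colors else None,  # first colour mentioned
        "shield_icons": icons,                          # in prompt order
    }
```

Keeping the template fixed and only filling slots like these is what makes the output consistent across thousands of prompts, whether the icons are generated or composited from the provided PNGs.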
AWS Expert required
We need a developer who can help with the following issue. We have a website which allows users to upload videos to be passed to an editor. In our S3 account, each project creates an ID, in this case referring to project_638. The problem: there is a script by another developer (no longer available) which creates a zip of these files when the project is originally submitted on the website, which can then be downloaded by a user from their account. It looks, however, like the zip folder does not contain all the files, for possibly two reasons:
1. File-size issues? We think this is unlikely, but are not sure.
2. More videos can be added to a project after the original submission, which is when we think the zip file is created, and any new videos do not get put into this zip folder.
We can't do much with the system as it's a live site. The PHP code to call this zip file is as follows, so we are ideally looking for a solution that makes any new videos and images get included in the zip file at all times:
{{ project_id }}/project_{{ project_id }}-videos.zip
reallymadeup2.s3.us-east-2.amazonaws.com/project_{{ project_id }}/project_{{ project_id }}-videos.zip
So essentially we need a way that, when the editor clicks "download all" in the website admin, it creates a zip of everything currently stored in that bucket for that user. Thanks.
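One way to approach the fix is to build the archive on demand from whatever is currently under the project's prefix, rather than serving a zip frozen at submission time. A hedged sketch of the archiving step (in Python for illustration, though the site is PHP; the object listing is stubbed as a parameter and would in production come from listing `project_{{ project_id }}/` in the bucket):

```python
import io
import zipfile

def zip_project_files(files):
    """Build a zip archive in memory from the project's current objects.

    files: iterable of (object_key, content_bytes). Keys are flattened
    to their basenames so the zip has no nested folders.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for key, content in files:
            zf.writestr(key.split("/")[-1], content)
    return buf.getvalue()
```

Because the listing happens at download time, videos added after the original submission are always included; the pre-built `project_{{ project_id }}-videos.zip` object becomes unnecessary.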
AWS Solutions Architect
We are seeking a highly skilled and experienced AWS Solutions Architect to join our team on a freelance basis. As an AWS Solutions Architect, you will play a key role in designing, implementing, and managing our cloud infrastructure to ensure optimal performance, security, and scalability. You will need to:
1. Architect and implement secure, scalable, and high-performance cloud infrastructure using AWS services.
2. Conduct architecture reviews to ensure solutions align with best practices and security standards.
3. Provide guidance on optimizing costs and resources through effective use of AWS services.
4. Troubleshoot and resolve issues related to AWS infrastructure and services.
5. IoT experience is a plus.
Website migration from Azure to AWS
Migrate a site from Azure to AWS. We currently have a user base of 20,000 users on our site.
opportunity
Automated Semantic Text Analysis Pipeline
Comprehensive Use-Case Specification: Automated Semantic Text Analysis Pipeline

Objective: Develop an automated semantic text-analysis pipeline that processes and analyses textual data extracted from documents. This pipeline enriches text with metadata for deeper insights and enables semantic search capabilities through a user-friendly interface. This stage of the project is for an MVP system and should leverage AWS services such as Textract for text extraction, a text-categorising stage with a simple-to-use GUI, all-mpnet-base-v2 for embedding, and Postgres with a vector extension. This job posting is for the MVP stage only, but we must be mindful of the stage-two development and facilitate rapid and straightforward scalability in any stage-one MVP processes.

System Overview: The solution encompasses AWS services for storage and processing, a custom interface for metadata enrichment, all-mpnet-base-v2 for generating text embeddings, Postgres with a vector extension for efficient storage and retrieval of vectors, and a custom-built web interface for user interaction. RAG will be implemented with as broad a context as possible for the model across a large document set.

Phase 1: MVP Stage

1. Document Storage and Processing Trigger
- Tool: Amazon S3.
- Process: Upload documents (PDFs initially) to designated S3 buckets; documents will be renamed in accordance with a set naming convention, and key metadata relating to the document entered into the database for future reference. This triggers the subsequent text-extraction process. For test purposes the uploads will be made manually; at later stages a web scraper will be added that automatically places PDF documents into the relevant S3 buckets.

2. Text Extraction
- Tool: AWS Textract.
- Process: Text is extracted from uploaded PDF documents and temporarily stored in S3 buckets to facilitate further processing.

3.
Text Enrichment
The developer is to advise on the best method of adding labels/categories to the text via an easy-to-use interface. Labels are to be added at a granular level to allow the return of text snippets, providing context to the LLM in formulating its responses from a broad range of documents without exceeding the token limit.

4. Text Vectorization
- Embedding tool: all-mpnet-base-v2
- LLM: Amazon SageMaker (using LLaMA 2).
- Process: The text is processed with LLaMA 2 to generate vector embeddings, capturing semantic information for advanced analysis and search functionalities.

5. Vector Storage
- Tool: Postgres with a vector extension.
- Process: Text vectors are stored in the database, allowing for efficient management and retrieval of vectorized data for semantic searches.

6. Front-end Web Application and Search Functionality
- Front-end technology: React.js.
- Key features:
  - Semantic search input and results display.
  - Email input field for collecting contact information for marketing purposes, forwarded to the client's email address.
  - Homepage containing descriptive marketing text.
  - 3 pages total: home page, interaction page, contact page, plus a pop-up with GDPR info. Graphics provided as template guidance.
- Back-end technology: Python with FastAPI.

Phase 2: Full Automation and Scaling

1. Automated Document Ingestion
- Process: A web-scraping tool is implemented to automatically identify and upload new documents to the S3 bucket, facilitating a continuous flow of data into the pipeline without manual intervention.

2. Scalable Architecture
- Deployment: The application components are containerized using Docker and managed with Kubernetes (Amazon EKS), ensuring the system can scale efficiently to accommodate increased data volumes and user queries.

3. Enhanced Processing Capabilities
- Improvements: Integrate additional NLP and ML models for broader and more nuanced text analysis. Consider fine-tuning custom models for specific domain applications.

4.
User registration and user-management system integration.

Please note the attached contract agreement, which will be deemed agreed to upon acceptance of the project. Your price given on PPH will be deemed to be your full and final price, and you will be deemed to have fully understood the scope, brief, and specification. To provide context, the project business plan has been uploaded; this is for context only and does not form part of the brief.
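Since the set naming convention is referenced but not specified in the brief, here is a purely illustrative sketch of the renaming step that would precede the S3 upload trigger; the `council-code/YYYYMMDD-doctype.pdf` key format is an assumption invented for the example:

```python
import re

def storage_key(council_code, date_yyyymmdd, doc_type):
    """Build a deterministic S3 object key for an incoming PDF.

    Example format (assumed, not from the brief):
    <council_code>/<YYYYMMDD>-<doc-type-slug>.pdf
    """
    if not re.fullmatch(r"\d{8}", date_yyyymmdd):
        raise ValueError("date must be YYYYMMDD")
    # Slugify the document type so keys are URL- and filesystem-safe.
    slug = re.sub(r"[^a-z0-9]+", "-", doc_type.lower()).strip("-")
    return f"{council_code}/{date_yyyymmdd}-{slug}.pdf"
```

A deterministic key like this lets the pipeline detect re-uploads of the same document (same key, new ETag) instead of creating duplicates, and gives the database row a stable reference to the S3 object.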
opportunity
Need Data Analytics from MySQL RDS
We have a huge MySQL database in AWS RDS and need analytical reports from the DB quickly and accurately. We have started a process of generating the reports by dumping the RDS data into AWS S3 using DMS tasks and generating the reports from S3, as reporting directly was affecting RDS utilisation. Is this the correct process, and can anyone suggest a good way to do this?
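The pattern being asked about, running reports off the S3 dump rather than the live instance, can be illustrated with a toy example. Here a CSV string stands in for an exported S3 object, and the column names are invented; in practice a serverless query engine over S3 (e.g. Amazon Athena) would replace the manual parse:

```python
import csv
import io

def sales_report(dump_csv):
    """Aggregate a report from a dumped export, never touching the live RDS.

    Returns total amount per region (illustrative columns only).
    """
    totals = {}
    for row in csv.DictReader(io.StringIO(dump_csv)):
        totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])
    return totals
```

The design point is sound: heavy analytical scans hit the dump, so OLTP traffic on the RDS instance is unaffected. Common variants of the same idea are a read replica for the reporting workload, or querying the DMS output in S3 directly with Athena.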
opportunity
Website scraping tool
# Composite Use Case for myClerk.ai Web Scraping Tool Development ## Project Overview The myClerk.ai project aims to automate the collection, organization, and monthly update of documents from approximately 10,827 UK council websites, including 10,450 parish and town councils and 377 larger councils. This initiative seeks to make council documents easily accessible and searchable, covering essential materials such as constitutional documents, terms of reference, minutes, and planning documents. ## Objectives - **Automate Document Extraction:** Develop a scraping tool to automate the retrieval of PDF documents across varied council websites, accounting for the unique structure and content of each site. - **Efficient Data Organization:** Utilize council reference codes to systematically organize documents on a web server. - **Monthly Updates:** Implement a mechanism to capture new documents on a monthly basis without duplicating existing files. - **Link Monitoring and Notifications:** Create a system to track and report broken links and facilitate updates or notifications to site administrators. - **Data Categorization for Larger Councils:** Classify documents on larger council websites for more efficient retrieval and analysis. ## Database Structure The development leverages a hybrid database approach: - **Relational Database (PostgreSQL):** Hosts a comprehensive list of councils and their metadata, crucial for guiding the scraping tool to the correct websites for document extraction. - **Vector Database:** Reserved for storing processed text from PDFs for content-based searches, but note that this element is separate from the scraping tool task. ## Suggested Technologies - **Web Scraping and Data Organization:** Python, with libraries such as BeautifulSoup, Scrapy, and Requests for web scraping and automation. AWS S3 for document storage and PostgreSQL on AWS RDS for data management. 
- **Server and Hosting:** AWS Lambda for cost-effective routine downloading tasks and Amazon Aurora Serverless for RDS to dynamically adjust computational capacity. - **Notification System:** AWS Lambda and SNS for monitoring and identifying broken links, sending notifications for action. ## Crawling and Scraping Process - **Crawling:** Implement a depth-controlled crawler to navigate each council's website, identifying webpages with PDF links at all levels. - **Scraping and Downloading:** Post-crawling, the tool will scrape the identified PDFs, checking against previous downloads to avoid duplication. The tool is designed to adapt to the diverse web structures of council sites, ensuring comprehensive document retrieval. ## Monthly Update Cycle - The tool will perform a complete cycle each month, identifying and downloading new or updated documents based on changes in file details, thereby keeping the database current without accumulating duplicates. ## Development and Testing - Prior to full deployment, the scraper will undergo a testing phase on a selection of websites to refine its operation, gradually scaling up to include the full range of targeted sites. Timescale: a basic model for testing is to be delivered as soon as possible; a number of weeks can be allowed for the full model, including deploying it to the host web server and connecting it to the database.
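The depth-controlled crawl and deduplication rules described above can be sketched as pure logic, with an in-memory link graph standing in for live council sites (in production, Scrapy or Requests + BeautifulSoup would supply the links):

```python
from collections import deque

def crawl_pdfs(links, start, max_depth, already_downloaded=frozenset()):
    """Breadth-first crawl of a link graph, collecting new PDF links.

    links: dict mapping a page URL to the URLs it links to.
    already_downloaded: PDFs from previous monthly cycles, skipped to
    avoid duplicates. Pages deeper than max_depth are not followed.
    """
    seen, queue, pdfs = {start}, deque([(start, 0)]), []
    while queue:
        url, depth = queue.popleft()
        for link in links.get(url, []):
            if link.endswith(".pdf"):
                if link not in already_downloaded and link not in pdfs:
                    pdfs.append(link)
            elif depth < max_depth and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return pdfs
```

The `already_downloaded` set is what makes the monthly cycle idempotent: re-crawling a site only yields documents not seen in earlier runs.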