
Docker Projects
Looking for freelance Docker jobs and project work? PeoplePerHour has you covered.
Backend Developer
About the Role
We are looking for a skilled Backend Developer to design and build scalable server-side applications. You will be responsible for developing APIs, managing databases, and ensuring high performance and responsiveness of our applications.

Key Responsibilities
- Design, develop, and maintain backend services and APIs
- Build scalable and secure server-side applications
- Work with databases to design efficient data models
- Integrate third-party services and APIs
- Optimize applications for speed and scalability
- Collaborate with frontend developers, DevOps engineers, and product teams
- Write clean, maintainable, and well-documented code
- Participate in code reviews and continuous improvement processes

Requirements
- 3+ years of experience in backend development
- Strong knowledge of Node.js, Python, Java, or PHP
- Experience building RESTful or GraphQL APIs
- Proficiency with databases such as PostgreSQL, MySQL, or MongoDB
- Experience with Git and version control workflows
- Knowledge of authentication, security, and API design
- Familiarity with cloud platforms (AWS, Azure, or Google Cloud)

Nice to Have
- Experience with microservices architecture
- Knowledge of Docker and Kubernetes
- Experience with CI/CD pipelines
- Familiarity with Redis, Kafka, or message queues

Benefits
- Competitive salary
- Flexible working hours
- Remote work options
- Professional development opportunities
- Collaborative international team
8 days ago · 66 proposals · Remote
CMS Developer (Remote)
About the Role
We are looking for a talented CMS Developer to design, develop, and maintain scalable content management systems. You will work closely with designers, product managers, and backend engineers to deliver high-quality digital experiences across websites and web applications.

Responsibilities
- Develop and maintain CMS-based websites and applications
- Customize CMS platforms such as WordPress, Drupal, or Contentful
- Build and maintain CMS themes, templates, and plugins/modules
- Integrate CMS with third-party APIs and services
- Optimize websites for performance, security, and SEO
- Collaborate with UX/UI designers to implement responsive designs
- Troubleshoot and resolve technical issues
- Maintain documentation and follow development best practices

Requirements
- 3+ years of experience working with content management systems
- Strong experience with WordPress, Drupal, or a headless CMS
- Proficiency in HTML, CSS, and JavaScript
- Experience with PHP or Node.js
- Familiarity with REST APIs and GraphQL
- Experience with Git version control
- Knowledge of SEO best practices and web performance optimization
- Ability to work in a remote, collaborative environment

Nice to Have
- Experience with a headless CMS (Strapi, Contentful, Sanity)
- Experience with React / Next.js
- Knowledge of Docker and CI/CD
- Experience with AWS, Azure, or Google Cloud

What We Offer
- Competitive salary
- Fully remote work environment
- Flexible working hours
- Opportunity to work on innovative digital products
- Collaborative international team

How to Apply
8 days ago · 49 proposals · Remote
opportunity
Trading bot development
Project Overview
We are building a high-performance, low-latency trading engine designed for microstructure-based execution strategies in a high-tax (STT) environment. This is NOT a basic retail trading bot. The system requires advanced system-level engineering, multi-core CPU architecture control, shared-memory communication, and a real-time observability dashboard. The focus of this project is minimizing latency between signal generation and order execution while maintaining regulatory compliance (Order-to-Trade Ratio constraints). The developer must understand low-level performance optimization, concurrency architecture, and Linux system behavior.

Core Technical Requirements

Python Version (Mandatory)
The engine must use the Python 3.13 free-threaded build (3.13t), NOT standard Python 3.10–3.12. Reason: standard Python uses the Global Interpreter Lock (GIL), which blocks true parallelism. In low-latency systems, a 1–2 ms delay caused by GIL contention is unacceptable.

Multi-Core Architecture with CPU Core Pinning
The engine must:
- Assign specific modules to specific CPU cores
- Use os.sched_setaffinity (Linux only)
- Prevent OS core migration (avoid context switching)
Modules include:
- Sentinel (risk & OTR monitoring)
- Sonar (market entropy / regime detection)
- Oracle (signal calculation loop)
- Execution Engine (order placement)
The goal is to eliminate unpredictable latency spikes caused by OS scheduling and cache invalidation.

Inter-Process Communication
Standard Python queues are NOT acceptable. Communication must use:
- multiprocessing.shared_memory
- Memory-mapped buffers
- Lock-free ring buffer architecture
Reason: standard queues introduce locking and object-allocation overhead, increasing latency. The target is sub-millisecond internal communication between the signal generator and the execution engine.
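As a minimal sketch of the core-pinning and shared-memory requirements above: the snippet below pins a process to one CPU core with os.sched_setaffinity and passes a fixed-size record through a multiprocessing.shared_memory buffer. The record layout, core id, and module names are illustrative assumptions, not part of the brief.

```python
# Hedged sketch: CPU core pinning + shared-memory handoff (stdlib only).
import os
import struct
from multiprocessing import shared_memory

def pin_to_core(core_id: int) -> None:
    # Linux-only: restrict this process to one CPU core to avoid
    # migration-induced cache invalidation.
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core_id})

# One 16-byte slot: uint64 sequence number + float64 price.
SLOT = struct.Struct("=Qd")

shm = shared_memory.SharedMemory(create=True, size=SLOT.size)
try:
    # Producer side (e.g. the signal loop): write one record. In a real
    # lock-free design the sequence number is written last, so a reader
    # that observes the new seq also observes the new price.
    SLOT.pack_into(shm.buf, 0, 1, 101.25)

    # Consumer side (e.g. the execution engine) would attach from another
    # process with SharedMemory(name=shm.name); here we read back in-process.
    seq, price = SLOT.unpack_from(shm.buf, 0)
    print(seq, price)  # -> 1 101.25
finally:
    shm.close()
    shm.unlink()
```

A production ring buffer would extend this to many slots plus head/tail counters in the same mapping; the single-slot version only shows the mechanics.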
Latency Measurement
The system must measure:
- End-to-end order placement latency
- Round-trip time (RTT)
- Module processing time
Using time.perf_counter_ns() and latency histogram logging. This data must be streamed to the dashboard.

Order Execution Logic
The system should:
- Prefer passive limit orders
- Include 200 ms cancel logic
- Manage the Order-to-Trade Ratio (OTR)
- Implement controlled order-flooding logic (compliant with broker rules)
This is not a simple market-order bot.

FRONTEND REQUIREMENTS (React Dashboard)
The frontend is NOT a trading UI. It is a real-time monitoring and control cockpit.
Preferred stack:
- React (Vite or Next.js)
- WebSocket for live streaming
- Lightweight charting (Canvas- or WebGL-based)

Required Dashboard Modules

Sentinel Panel
- Real-time RTT graph
- 20 ms lockdown threshold indicator
- CPU usage per pinned core
- Emergency status

Sonar Panel
- Market regime indicator (Attack / Veto mode)
- Entropy score display
- Zero-trust gate status

Oracle Panel
- Weighted Order Book Imbalance (WOBI) heatmap
- Liquidity imbalance %
- Signal strength score
Must use high-performance rendering (Canvas, not heavy SVG).

Execution Panel
- Net Expected Value (NEV)
- Fill rate %
- Cancel rate
- Order-to-Trade Ratio (OTR) status

Emergency Kill Switch
The dashboard must include a global kill switch:
- Sends a signal to the monitoring service
- Monitoring service writes a flag to shared memory
- Engine halts immediately
The dashboard must NOT communicate directly with the broker API.

Deployment Requirements
- Linux-based environment (Ubuntu preferred)
- Dockerized setup preferred
- Separate processes: trading engine, monitoring microservice, React frontend
- Google Cloud compatible

10 MOST IMPORTANT SKILLS TO ADD
Attach these skills on Freelancer:
1. Python 3 (Advanced Concurrency & Multiprocessing): must understand the GIL, free-threaded builds, shared memory.
2. Low-Latency System Design: experience reducing microsecond-level bottlenecks.
3. Linux System Programming: knowledge of CPU affinity, process scheduling, performance tuning.
4. Multithreading & Multiprocessing Architecture: designing multi-core optimized applications.
5. Memory Management & Shared Memory IPC: experience with mmap, shared-memory buffers.
6. Financial Market Microstructure Knowledge: understanding order books, liquidity imbalance, passive vs aggressive orders.
7. WebSocket & Real-Time Streaming: required for live dashboard data.
8. React.js (Performance-Optimized UI): real-time data rendering without UI lag.
9. Performance Profiling & Benchmarking: must measure and optimize latency.
10. Cloud Deployment (Google Cloud / Linux VM / Docker): production-ready deployment experience.

VERY IMPORTANT
Add this to filter weak developers. Applicants must answer the following:
- Have you worked with Python shared memory or mmap before?
- Have you implemented CPU core pinning on Linux?
- How would you measure internal engine latency?
- How would you prevent the dashboard from affecting trading engine performance?
This will eliminate 80% of generic bot developers.
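The brief's latency-measurement requirement (time.perf_counter_ns() plus histogram logging) can be sketched as below. The bucket edges and the timed function are illustrative assumptions; real code would wrap order placement.

```python
# Hedged sketch: nanosecond timing + latency histogram (stdlib only).
import time
from bisect import bisect_right

BUCKETS_US = [50, 100, 250, 500, 1000]  # histogram bucket edges, microseconds
histogram = [0] * (len(BUCKETS_US) + 1)  # last bucket = overflow

def timed(fn, *args):
    # Time one code path with perf_counter_ns and record it in the histogram.
    t0 = time.perf_counter_ns()
    result = fn(*args)
    elapsed_us = (time.perf_counter_ns() - t0) / 1_000
    histogram[bisect_right(BUCKETS_US, elapsed_us)] += 1
    return result, elapsed_us

# Example: time a trivial stand-in operation.
_, us = timed(sum, range(1000))
print(sum(histogram))  # -> 1
```

The histogram list is what would be streamed to the Sentinel panel; percentile estimates fall out of the cumulative bucket counts.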
9 hours ago · 14 proposals · Remote
Full Stack Development
Full Stack Developer (Python / JavaScript / AI Experience Preferred)
We are looking for an experienced Full Stack Developer to join our team for ongoing development projects. The ideal candidate has strong backend and frontend experience, DevOps knowledge, and excellent communication skills.

Requirements
- 3+ years of experience in full stack development
- Ability to work during the EST timezone
- Strong experience with Python, JavaScript, and TypeScript (experience with C#, .NET, or Java is also acceptable)
- Frontend experience with React, Vue, Angular, or similar frameworks
- Backend experience with Node.js, Django, Flask, or PHP
- Familiarity with DevOps practices (CI/CD, Docker, cloud platforms, etc.)
- Some AI/ML development experience is a plus
- Experience working across multiple industries or product domains
- Strong communication skills; preference for candidates with native or fluent American English

Responsibilities
- Design and develop scalable full-stack applications
- Collaborate with cross-functional teams to deliver high-quality software
- Participate in architecture discussions and system design
- Write clean, maintainable, and well-tested code
- Communicate progress and technical details clearly with the team

Nice to Have
- Experience with cloud platforms such as AWS, Azure, or GCP
- Experience building AI-powered features or integrating AI APIs
- Experience working in distributed or remote teams
22 days ago · 30 proposals · Remote
Make a (production/demo) level install of a DGX Spark cluster
Hi, I'm looking for an experienced AI infrastructure specialist (or a passionate enthusiast with solid hands-on experience) to build a robust, flexible, and high-performance backend foundation for OpenClaw (an open-source autonomous AI agent) on two NVIDIA DGX Spark systems connected via 200 GbE (ConnectX-7). The goal is a stable, demo-ready environment that showcases the power of open-source models. OpenClaw will run on a separate system and connect easily via OpenAI-compatible APIs (or equivalent best-practice interfaces).

Everything should prioritize:
- Easy remote access and external connectivity (via Tailscale/ZeroTier)
- Fast performance within the hardware's unified memory constraints
- Simple model switching/adding later (hot-swappable where possible)
- Persistent services with web UIs for live demos
- On-demand tools that can spin up/down cleanly

Hardware & Current State
- 2× NVIDIA DGX Spark (Grace Blackwell, 128 GB unified LPDDR5x memory each, ARM64)
- 200 GbE interconnect + 1 GbE internet links
I have a basic cluster setup, Timeshift snapshots, ZeroTier, and Tailscale already running. You're welcome to rebuild from scratch if that's cleaner and faster.

Core Requirements (Persistent Where Possible)
- vLLM as the primary inference engine with a large-context main model (e.g., Nemotron 120B or equivalent); must support easy switching to newer models
- Whisper (or a best-practice alternative like faster-whisper), ready for OpenClaw API integration
- Piper TTS, ready for OpenClaw API / text-to-voice integration
- All persistent services should run with clean web UIs for demo purposes

On-Demand Tools (Configured for Easy External/Tailscale Access + Web UIs)
- Ollama + web UI (for specific or scheduled models)
- OCR model + workflow (let's discuss the best option, e.g., EasyOCR/PaddleOCR, and data saving/integration with other tools)
- Image generation (primarily for OpenClaw use) with multiple models available
- LoRA training tools for image generation
- RAG / vector DB (choose the best integration with OpenClaw and other tools, e.g., Qdrant, Chroma, or Milvus)
- Multi-agent capable dev tools / environment

Central Portal & Usability
A single web-based portal (e.g., OpenWebUI or equivalent) for central access to all tools, easy model switching, admin controls, and live demos.

Nice-to-Have / Optional Enhancements (quote separately if interested)
- Full 2-node clustering with tensor parallelism (e.g., using the open vLLM-DGX-Spark repo or Ray/NCCL)
- Docker Compose / lightweight Kubernetes orchestration for easy updates and portability
- Monitoring dashboard (Prometheus + Grafana)
- NVIDIA NIM microservices for optimized inference
- Any other best-practice tools you recommend for integration, speed, or flexibility

Your Profile
You're deeply familiar with these tools (or eager to dive in as an enthusiast): NVIDIA DGX systems (especially Spark / Grace Blackwell), multi-node inference (vLLM, tensor/pipeline parallelism), Docker/containerization, and API integrations. You understand VRAM/unified-memory optimization and can make everything work together smoothly. Bonus if you have experience with OpenClaw, OpenWebUI, RAG pipelines, or agent frameworks. Proper English communication (written and spoken) is a must for smooth collaboration.

Timeline & Expectations
We have a tight deadline; I need a stable, running environment live as soon as possible. You're completely free to experiment, test, and play around with different configurations during setup, but the priority is delivering a functional, demo-ready system quickly. Speed matters, while still maintaining quality and stability.

Compensation
Competitive hourly rate (fully flexible and based on your region, experience, and the exact scope) or a fixed-price project bid if preferred. I'm completely open to discussion; propose whatever rate works best for you and your location. This project serves as a test case for potential further collaboration and ongoing work if it goes well.
If you're the right fit, there will be plenty of exciting follow-up opportunities.

Work Style & Availability
This is remote work. I am completely flexible on working hours and not EU-bound. As long as you're excellent at what you do, I'm happy to work with talent from anywhere in the world (including low-income countries; great people deliver great results everywhere).

If this sounds like a good fit, reply with:
- Your relevant experience (especially with DGX Spark, vLLM multi-node, or similar stacks; enthusiasts with strong practical knowledge are very welcome)
- A rough timeline and cost estimate (with your proposed rate)
- Any questions or suggested improvements

Looking forward to building something powerful together!
12 days ago · 14 proposals · Remote
Senior AI Developer / AI Automation Engineer (LLM, AI Agents)
Job Overview:
We are looking for an experienced AI Developer / AI Automation Engineer to design and build advanced AI applications, intelligent agents, and automation platforms similar to leading AI products across different industries:
• (AI-powered enterprise legal assistant)
• (AI agent-based digital workforce platform)
• (Generative AI business automation platform)
• (AI analytics and marketing intelligence platform)
These platforms leverage LLMs, AI agents, data pipelines, automation workflows, and API integrations to automate business operations, data analysis, marketing insights, and enterprise workflows. The ideal candidate should have strong experience in generative AI, LLM development, AI agent frameworks, and automation systems.

Key Responsibilities:
1. AI Application Development
• Design and build AI-powered applications using LLMs
• Develop AI agents capable of performing automated tasks
• Implement retrieval-augmented generation (RAG) systems
2. AI Agent & Workflow Automation
• Build multi-agent AI systems
• Create automated business workflows for marketing, analytics, customer support, and data processing
3. LLM Integration
• Integrate AI models such as OpenAI, Claude, Gemini, and Llama
• Fine-tune models for domain-specific applications
4. Data & Knowledge Systems
• Implement vector databases and semantic search
• Build knowledge bases for enterprise AI assistants
5. API & SaaS Integration
• Integrate AI with CRM systems, marketing platforms, Google Ads / Meta Ads, analytics dashboards, and internal business systems
6. AI Analytics & Reporting
• Develop AI-powered analytics dashboards for marketing data, campaign optimization, and predictive insights
Required Technical Skills

AI / Machine Learning
• Generative AI
• Large Language Models (LLMs)
• Prompt engineering
• RAG architecture
• AI agents

Programming Languages
• Python (mandatory)
• JavaScript / TypeScript
• Node.js

AI Frameworks (experience with at least one)
• LangChain
• LlamaIndex
• AutoGPT
• CrewAI
• Semantic Kernel
• Hugging Face

AI Infrastructure
• Vector databases (Pinecone, Weaviate, Chroma)
• Model deployment
• AI APIs

Cloud Platforms
• AWS
• GCP
• Azure

Databases
• PostgreSQL
• MongoDB
• Elasticsearch

DevOps
• Docker
• Kubernetes
• CI/CD

Preferred Skills
• AI agent development
• AI workflow automation
• Data pipeline development
• Marketing analytics AI
• NLP systems
• Chatbot / AI assistant development

AI Products / Systems You May Work On
Develop systems similar to:
• AI legal assistant platforms
• AI business automation agents
• AI marketing analytics platforms
• Enterprise AI knowledge systems
• AI SaaS automation tools

Candidate Profile:
We are looking for someone who:
• Has strong experience building AI products
• Understands LLM architecture and AI workflows
• Can design AI SaaS platforms from scratch
• Has experience building scalable AI systems

Note: This is a long-term, full-time position with a good salary package. Please apply only if you are not working for an agency and can be fully dedicated to us; this is not a one-time project, and we want people who intend to stay with us for the long term.

Best Regards,
Ashish
2 days ago · 29 proposals · Remote
I need an experienced AI engineer
We are hiring a Senior AI Engineer to own and accelerate AI capabilities across the platform, from customer-facing features to internal agentic workflows and production-grade AIOps. This is a high-impact, hands-on role where you will shape the next generation of AI systems and directly influence customer experience, product velocity, and operational efficiency.

The Opportunity
You will lead development across three core pillars:
- AI Product Engineering: ship core AI-powered features (intake, scheduling, session notes, billing automation, and more)
- AI Foundations / Enablement: build reusable primitives, evaluation tooling, and golden paths
- Agentic Workflows & AIOps: create AI agents that automate operational work and support internal operations

What You'll Do

AI Product Engineering (Customer-Facing Features)
- Build AI features end-to-end: requirements → design → implementation → rollout → evaluation
- Implement LLM-backed workflows including summaries, extraction, structured output, copilots, billing scrubbing, and guided automation
- Collaborate across Product, Design, QA, and clinical SMEs
- Own AI-powered improvements across RCM, scheduling, intake, documentation, reporting, and customer support automation

AI Foundations / Enablement
- Create reusable AI primitives: prompt templates, agent patterns, tool schemas, safety guardrails, retrieval modules
- Build evaluation harnesses and continuous regression testing
- Enable developer productivity via AI coding tools and best practices
- Establish "golden paths" for consistent and safe AI feature delivery

Agentic Workflows & AIOps
- Develop in-platform agents for billing checks, eligibility validation, claim scrubbing, data cleanup, workflow routing, and anomaly detection
- Build AIOps capabilities: incident summaries, RCA suggestions, intelligent alerts, correlation
- Integrate with telemetry systems for intelligent operations
LLMOps / Production-Ready AI
- Implement monitoring, cost governance, retries, fallbacks, and safe-mode behavior
- Own prompt/model evaluation frameworks and online QA metrics
- Ensure HIPAA-aligned data flows: privacy controls, redaction, audit logs, PHI boundaries

Data & Infrastructure
- Build vector stores, embeddings, retrieval workflows, and knowledge bases
- Partner with data engineering on eval datasets and rulesets
- Contribute to scalable infrastructure for AI agents and async workflows

Tech Stack
- AI platforms: Azure OpenAI, OpenAI, Anthropic, Bedrock
- Frameworks: RAG, agents, orchestration
- Backend: .NET, Node.js, Python
- Data: SQL Server, Postgres, Redis, vector databases
- Observability: Grafana, Loki, Prometheus, OpenTelemetry
- Cloud: Azure, AWS, Docker, Kubernetes
- AI dev tools: Copilot, Claude Code, Cursor, Kiro

What You Bring
- 5–10+ years building production software systems
- Hands-on experience delivering LLM-powered features or AI agents
- Strong engineering fundamentals
- Experience with LLMOps, safety, monitoring, evaluation
- Strong cross-functional communication
- Healthcare / HIPAA experience a plus
- Fluent English
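The "retries, fallbacks, and safe-mode behavior" requirement above can be sketched as a small wrapper: try the primary model, retry transient errors with exponential backoff, then move to a fallback. The model names and the fake client are illustrative assumptions, not the posting's actual stack.

```python
# Hedged sketch: retry-with-fallback wrapper for LLM calls (stdlib only).
import time

def call_with_fallback(call, models, retries=2, base_delay=0.01):
    last_err = None
    for model in models:
        for attempt in range(retries):
            try:
                return call(model)
            except RuntimeError as err:  # stand-in for a transient API error
                last_err = err
                time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_err  # safe mode: surface the failure once all fallbacks fail

# Fake client for illustration: the primary model always fails,
# the fallback succeeds.
def fake_call(model):
    if model == "primary-model":
        raise RuntimeError("rate limited")
    return f"ok:{model}"

print(call_with_fallback(fake_call, ["primary-model", "fallback-model"]))
# -> ok:fallback-model
```

In production the same wrapper would also emit latency and cost metrics per attempt, which is where the monitoring and cost-governance hooks attach.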
25 days ago · 33 proposals · Remote
Past "Docker" Projects
opportunity
Host, deploy, and test an existing app
This project aims to deploy, on a dedicated VPS server, the complete environment required to run the WHIMO solution developed by the European Forest Institute. WHIMO is an Android application designed to collect, store, and transfer geolocation data to support compliance with the EU's EUDR regulation. To operate properly, the mobile application relies on a stable, secure, and publicly accessible backend environment.

The mission is to deploy, configure, and secure all backend services of WHIMO using the official GitHub repositories, and to ensure that the API is fully operational so the Android application can connect to it. This includes preparing the VPS, installing all dependencies (Docker, PostgreSQL, Redis, Gunicorn, HTTPS reverse proxy), setting up Firebase credentials, configuring the network and security layers, and delivering complete technical documentation. The final objective is to deliver a stable, secure, and operational hosted application on the new server environment, ready for use, while respecting best practices in development, security, and operations.

Repositories
- Android application: https://github.com/EuropeanForestInstitute/whimo-android
- Backend (Django + REST API): https://github.com/EuropeanForestInstitute/whimo-backend
- Infrastructure (optional, IaC): https://github.com/EuropeanForestInstitute/whimo-infra

To be provided: VPS credentials
Python Developer / Browser Automation / Playwright
We operate a Python-based automation system that interfaces with a platform. The system is stable and actively in use, but requires ongoing maintenance, occasional bug fixes, and new feature development as the platform evolves. We're looking for a reliable freelance developer to join us on a flexible, ongoing basis and take ownership of the automation layer.

Tech Stack
– Python
– Playwright (async, browserless)
– FastAPI
– MongoDB
– Docker

What You'll Work On
– Writing, updating, and refining browser automation scripts that interact with third-party web applications
– Handling session management, cookie persistence, and authentication flows to keep automations running reliably
– Building and maintaining API integrations between our system and external services
– Diagnosing and resolving issues when target platforms change their UI or behavior
– Proposing and implementing improvements to make the system more robust, faster, or easier to maintain
– Working within a Dockerized environment and following existing code conventions

Ideal Candidate
– Strong hands-on experience with Playwright in Python, particularly using the async API
– Comfortable working with headless browsers, managing browser contexts, and debugging flaky or timing-sensitive automation
– Familiar with FastAPI and RESTful API design
– Experience with MongoDB or similar document-based databases
– Able to read, understand, and contribute to an existing codebase without extensive onboarding
– Self-directed and comfortable working independently with minimal supervision
– Communicative, responsive, and dependable; we value people who flag issues early and keep things moving
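One recurring pattern in the "flaky or timing-sensitive automation" work this posting describes is a retry helper for steps that intermittently fail (selectors not yet attached, transient timeouts). A stdlib-only sketch is below; in the real stack it would wrap Playwright async calls rather than a plain function, and the backoff values are illustrative.

```python
# Hedged sketch: retry helper for flaky automation steps (stdlib only).
import time

def retry(step, attempts=3, delay=0.01, exceptions=(TimeoutError,)):
    # Re-run `step` until it succeeds or the attempt budget is exhausted.
    for i in range(attempts):
        try:
            return step()
        except exceptions:
            if i == attempts - 1:
                raise  # out of attempts: let the caller see the failure
            time.sleep(delay * (i + 1))  # simple linear backoff

# Simulated flaky step: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("element not ready")
    return "clicked"

print(retry(flaky))  # -> clicked
```

Keeping the retry policy in one helper (instead of scattering sleeps through scripts) also makes it easy to log every failed attempt, which is how timing-sensitive bugs get diagnosed.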
opportunity
AI-Powered Price Scraper & Monitoring System (Multi-Website)
We are looking for an experienced developer to build a scalable AI-powered price scraping and monitoring system. The system should automatically extract product pricing data from multiple e-commerce websites and store it in a structured database for monitoring and analysis. The system must support multi-tenant architecture, role-based permissions, subscription tiers, and Stripe payment integration. The goal is to allow different companies to monitor product prices across multiple websites, with usage limits based on subscription plans.

Project Scope

1. Target Websites
• Scrape product prices from 7–10 e-commerce websites
• Support dynamic content (JavaScript-rendered pages)
• Proxy rotation & anti-bot handling
• Scheduled scraping
• Historical price tracking
• Price change alerts (email or webhook)
• Handle pagination and product variations

2. Data Extraction
• Product name
• Current price
• Original price (if available)
• SKU / product ID
• Availability status
• Timestamp

3. Multi-Tenant Architecture
3.1 Super Admin role
• Manage all companies
• Manage subscription plans
• View system-wide usage
• Suspend / activate companies
3.2 Company Admin role
• Manage company users
• Set scraping targets (websites & products)
• View company usage stats
3.3 Company Users
• View price tracking dashboard
• Access only assigned websites/products
3.4 Subscription & Usage Limits
The system must support different plan levels. Each plan should control:
• Maximum number of websites
• Maximum number of products
• Scraping frequency (e.g., 1h / 3h / 6h / 24h)
• Maximum concurrent scraping jobs
• Historical data retention length
Stripe integration:
• Stripe subscription integration
• Monthly / yearly billing (7-day free trial)
• Webhook handling for subscription status updates
• Automatic feature unlock based on plan
• Auto-suspend account if payment fails
• Admin ability to manually upgrade/downgrade plans

4. AI-Assisted Selector Detection
• Use AI or intelligent selector logic to detect price elements
• The system should adapt if minor DOM changes occur
• Minimize manual reconfiguration

5. Infrastructure
• Proxy rotation support
• Anti-bot handling
• Headless browser support (e.g., Puppeteer / Playwright)
• Scalable deployment (Docker preferred)

6. Database & Storage
• Store data in MySQL
• Historical price tracking
• Ability to compare price changes

7. Monitoring & Automation
• Scheduled scraping (e.g., every 1–6 hours)
• Email or webhook alerts when prices change
• Logging and error reporting

8. Dashboard
• Admin and user dashboards
• Search by product
• View historical price charts

Technical Requirements
Preferred stack:
• Laravel
• Playwright / Puppeteer / Scrapy
• REST API architecture
• Docker deployment

Deliverables
• Fully working scraping system
• Deployment guide
• Source code
• Documentation
• 2 weeks of post-delivery support

Bonus: experience with anti-bot bypass, rotating residential proxies, and large-scale scraping is highly preferred. If interested, please include your portfolio and examples of similar scraping projects.
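The price-change alert requirement in this brief reduces to comparing the latest scraped price against the last stored one and firing above a threshold. A minimal sketch follows; the 5% threshold and the alert record's shape are illustrative assumptions (the spec leaves them open), and the preferred stack is Laravel, so this Python version only shows the logic.

```python
# Hedged sketch: price-change detection for monitoring alerts.
def price_alert(prev: float, current: float, threshold_pct: float = 5.0):
    # Return an alert record if the price moved by >= threshold_pct, else None.
    if prev <= 0:
        return None  # no usable baseline yet
    change = (current - prev) / prev * 100
    if abs(change) >= threshold_pct:
        return {"change_pct": round(change, 2),
                "direction": "up" if change > 0 else "down"}
    return None

print(price_alert(100.0, 89.0))   # -> {'change_pct': -11.0, 'direction': 'down'}
print(price_alert(100.0, 102.0))  # -> None (below the 5% threshold)
```

Per-plan scraping frequency then just decides how often this comparison runs; the alert record itself is what an email or webhook notifier would serialize.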
N8n Automation Specialist (Intermediate)
We are looking for an intermediate n8n user to assist with automating various business tasks and building reliable workflows. This will be ongoing, as-needed work, starting with 1–3 initial automations and expanding as we identify more opportunities.

What I need help with
- Building and improving n8n workflows for day-to-day automation
- Connecting tools via API integrations (REST), webhooks, and triggers
- Automating repetitive tasks such as:
  - Data syncing between apps (e.g., Excel/Slack/Notion/CRM)
  - Email/Slack/Teams notifications
  - Form submissions → routing → follow-ups
  - File processing and structured data transformations
  - Scheduled reports and automated updates
- Adding error handling, retries, logging, and alerts so workflows are stable
- Working with authentication methods (OAuth, API keys, tokens)
- Light custom logic using Function nodes (JavaScript) when needed

Deliverables / expectations
- Quick discovery of the process (short call or written brief)
- Build workflows in n8n with clean structure and clear node naming
- Testing + edge-case handling (rate limits, duplicates, failed runs)
- Short documentation / handover notes for each automation
- Small improvements over time + ongoing maintenance

Nice to have
- Experience deploying/hosting n8n (Docker, VPS, cloud hosting)
- Familiarity with databases (Postgres/MySQL), webhook security, and queues
- Experience integrating common tools (Google Workspace, Slack, HubSpot, Shopify, Stripe, Trello, Asana, ClickUp, etc.)

Communication / time zone
I'm in the Asia/Dubai time zone. Some overlap for messages/calls is preferred, but asynchronous work is fine if you communicate clearly.

To apply, please include
- A brief summary of your n8n experience
- 1–2 examples of workflows you've built (a high-level description is fine)
- Your approach to error handling & monitoring in n8n
- Your hourly rate and availability this week
opportunity
Custom Contractor Management System (Replace Tradify)
Project Name: Custom Contractor Management System (Replace Tradify)
Project Type: Full custom web + mobile application

Overview:
We are building a full contractor management system to manage engineers, subcontractors, jobs, timesheets, invoicing, GPS tracking, and reporting. The system must be scalable (currently 9 engineers, 500+ in future), secure, GDPR-compliant, and integrate with Sage first and Xero later. We need a freelancer (or small team) to develop the backend, frontend, mobile apps, and database according to detailed specifications.

1️⃣ Key Features / Requirements

A) Web Dashboard (Managers/Admins/Accountants)
• Job management (create, assign, track status, attach files/photos)
• Subcontractor management (assign jobs, track jobs, generate POs, track invoices)
• Client invoice management (create, track, integrate with Sage)
• Reports: timesheets, material usage, profit analysis
• Engineer live map / GPS tracking overview
• Alerts: overdue invoices, missing photos, incomplete jobs
• Role-based access: Admins, Managers, Accountants, Field Supervisors

B) Mobile App (Engineers)
• Job list (assigned / in progress)
• GPS tracking (real-time + periodic, check-in/out)
• Job report form: time on site (auto/manual), travel time, materials used, parking/fees, findings & recommendations, tick-box checklists, photos (before/during/after)
• Submit reports to the web dashboard
• Timesheet tracking + weekly summary

C) Subcontractor Module
• Assign jobs to subcontractors
• Track job status
• Generate purchase orders (POs)
• Track subcontractor invoices (manual + CSV/XLSX upload)
• Automatic reminders for due / overdue invoices
• Exportable / Sage integration

D) Invoicing Module
• Quote → Job → Invoice workflow
• Retainers / deposits
• Recurring invoices
• PDF export
• Sage integration first → Xero later

E) File Upload / Import
• CSV/XLSX upload for subcontractor invoices
• Validate fields, duplicates, missing info
• Track manual vs file-uploaded invoices

F) Reporting
• Job summary, material usage, profit analysis
• Timesheets & payroll export
• Engineer GPS history / route playback

2️⃣ Technical Requirements
• Backend: Node.js + NestJS
• Web frontend: React + TypeScript
• Mobile app: React Native (iOS + Android)
• Database: PostgreSQL
• Realtime cache / GPS: Redis
• Hosting: AWS + Docker + CI/CD
• Accounting integration: Sage first, Xero later
• Notifications: push + email
• GDPR-compliant storage and encryption

3️⃣ Deliverables
• Fully functional web dashboard
• Mobile apps for engineers (iOS + Android)
• Subcontractor management module (web + optional mobile)
• Invoicing module with Sage integration
• Timesheet + GPS tracking module
• Database schema & API endpoints
• File upload / import functionality
• Deployment scripts (AWS / Docker / CI/CD)
• Documentation (user manual + API documentation)

4️⃣ Project Phases / Milestones
Phase 1 – MVP:
• Engineer mobile app (GPS + job reports + timesheets)
• Web dashboard (job management + reporting)
• Subcontractor module (manual + file upload invoices + POs)
• Invoicing (Sage integration)
Phase 2 – Optional:
• Advanced reporting / analytics
• Xero integration
• Material stock & procurement
• Client portal
Phase 3 – Optional / Future:
• SaaS multi-company version
• AI-assisted job report summaries
• Fleet & asset tracking

5️⃣ Requirements from the Freelancer
• Experience with Node.js, React, React Native, PostgreSQL, AWS
• Experience building CRM / ERP / field service apps
• Ability to design scalable architecture
• Experience with API integration (Sage/Xero)
• Ability to handle file uploads, CSV/XLSX imports, and validation
• Strong English communication and documentation skills
• Deliver code in phases/milestones
• Provide full technical documentation + deployment scripts
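For the GPS history / route playback feature mentioned in this brief, the core computation is the great-circle distance between consecutive track points. A small sketch follows (the spec's stack is Node.js; this Python version, with invented sample coordinates, only illustrates the math).

```python
# Hedged sketch: haversine distance for GPS track length / route playback.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, Earth radius ~6371 km.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def route_length_km(points):
    # Sum segment distances over consecutive (lat, lon) pairs.
    return sum(haversine_km(*a, *b) for a, b in zip(points, points[1:]))

# Hypothetical three-point track (illustrative coordinates).
track = [(51.5074, -0.1278), (51.5155, -0.0922), (51.5230, -0.0800)]
print(round(route_length_km(track), 2))  # roughly 3.8 km for this sample
```

The same pairwise calculation also supports sanity checks on GPS data, e.g. flagging implausible jumps between consecutive check-in points.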
Interactive AI Experience – 3D Guide & Custom Image Gen
I am an artist developing a browser-based interactive ritual experience where a 3D speaking character guides participants through a reflective, AI-driven dialogue about the future. At the end of the interaction, the system produces:
• A symbolic, poetic spoken response
• One AI-generated image based on the participant's clarified vision, rendered in a custom visual style trained on my artwork

This is a poetic, immersive digital art experience, not a generic chatbot or commercial tool.

Deliverable: A mini website / web module that can be integrated into an existing website (for example, as a subpage or subdirectory).

Scope Clarification
The generated images will later be shown in a separate digital "wall" project built by another team. This job does NOT include building that wall interface. Your responsibility is to:
✔ Generate the images
✔ Store them with structured metadata
✔ Make them exportable for future integration

Technical Constraints (Non-Negotiable)
• Open-source / open-weight AI models only (LLM, image generation, TTS, STT)
• Self-hosted deployment on my infrastructure (Hetzner servers)
• No proprietary AI APIs

Core User Experience Flow
- Short conceptual intro animation
- 3D character appears and speaks, introducing the ritual
- User selects one of five thematic prompts
- User shares a vision (text input; voice input optional bonus)
- AI-guided dialogue (2–4 turns) to clarify the scenario
- Final symbolic spoken response from the character
- One AI-generated image created from the clarified vision
- Session data saved for archive and future visual display

Technical Requirements

Frontend (Mini Website)
• Immersive but lightweight interface
• Smooth transitions between stages
• Audio playback (music + character voice)
• Responsive design (desktop + mobile)
• Built using React / Next.js or similar

3D Speaking Character
• WebGL / Three.js / A-Frame (or similar)
• Rigged character model (provided)
• Idle animation
• Speaking animation synced to audio (lip sync preferred, amplitude-based acceptable for MVP)

AI Dialogue System (Open-Source LLM)
• Self-hosted open-weight model
• Multi-turn conversation handling
• Structured prompting system
• Outputs:
  – follow-up prompts
  – final poetic response
  – structured summary for image generation

Voice System (Open-Source TTS)
• Open-source text-to-speech hosted on server
• Audio drives speaking animation

Custom Style Image Generation
The generated image must consistently match a custom artistic visual language based on my artwork. Prompting alone is not enough. You must implement:
Preferred: LoRA training using my artwork dataset
Alternative: Style adapter / reference conditioning
Requirements:
• One image per session
• Seed reproducibility
• Style strength control
• Save prompt + generation parameters

Backend & Storage
Store for each session:
• Selected prompt theme
• Dialogue transcript
• Final spoken response
• Scenario summary
• Image prompt + parameters
• Generated image file
• Timestamp

Admin Panel
Simple password-protected page to:
• View sessions
• Download text and images

Deployment Requirements
• Linux deployment on Hetzner
• Docker / Docker Compose preferred
• Documentation for:
  – setup
  – model downloads
  – environment variables
  – running services
  – updating style model

Project Timeline
Total duration: 2 months

Skills Required
• Web 3D (Three.js / A-Frame / WebGL)
• Experience integrating animated 3D characters in the browser
• Experience serving open-source LLMs
• Diffusion model LoRA or adapter training
• Backend/API development
• Docker + Linux deployment

How to Apply
Please include:
- 2–3 relevant projects (AI apps, WebGL/WebXR, or interactive experiences)
- Proposed tech stack (frontend, backend, model serving)
- Which open models you would use (LLM, diffusion, TTS) and why
- Recommended server setup (GPU/VRAM) for acceptable performance

Screening Questions
- How would you sync speech audio to a 3D character animation in the browser?
- Which open-weight LLM would you deploy, and how would you serve it?
- How would you train and deploy a custom style LoRA for image generation?
- What server setup would you recommend, and why?
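For orientation, the "Backend & Storage" requirements above amount to writing one image plus one structured metadata record per session, keyed so the images stay exportable for the separate wall project. A minimal sketch, assuming a flat per-session directory layout (file and field names are illustrative, not prescribed by the brief):

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def save_session(archive_dir, theme, transcript, spoken_response,
                 summary, image_prompt, generation_params, image_bytes):
    """Persist one ritual session: the generated image plus structured
    metadata (theme, transcript, prompt, parameters, timestamp)."""
    session_id = str(uuid.uuid4())
    session_dir = Path(archive_dir) / session_id
    session_dir.mkdir(parents=True)

    (session_dir / "image.png").write_bytes(image_bytes)
    metadata = {
        "session_id": session_id,
        "theme": theme,
        "transcript": transcript,                # list of dialogue turns
        "spoken_response": spoken_response,
        "scenario_summary": summary,
        "image_prompt": image_prompt,
        "generation_params": generation_params,  # includes seed, for reproducibility
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    (session_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
    return session_dir
```

Because the seed and generation parameters are stored alongside each image, any session's image can later be regenerated or re-styled, which is what the "seed reproducibility" requirement is after.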
Full-Stack AI Engineer (React + Python + NLP)
We are looking for a Full-Stack AI Engineer with strong experience in React and Python, and hands-on expertise in Natural Language Processing (NLP) and LLM-based systems. This role involves building AI-powered web applications where modern frontend interfaces integrate seamlessly with intelligent backend services. You will work on designing scalable systems that process, analyze, and generate text using NLP and machine learning techniques. We’re seeking someone comfortable operating across the full stack — from responsive UI development to AI model integration and backend architecture.

Core Responsibilities
* Develop modern frontend applications using **React**
* Design and build backend services using **Python**
* Develop and integrate **NLP pipelines** and LLM-based features
* Implement RAG-based or AI-assisted workflows
* Build APIs that connect AI services with frontend applications
* Design scalable, maintainable system architecture
* Optimize model performance and response latency
* Ensure clean, well-tested, and production-ready code

Required Technical Skills
* Strong experience with **React**
* Strong proficiency in **Python**
* Hands-on experience with **NLP libraries** (e.g., spaCy, NLTK, Hugging Face Transformers)
* Experience integrating **LLMs via APIs**
* Experience building RESTful APIs (FastAPI, Flask, or similar)
* Experience handling structured and unstructured text data
* Solid understanding of backend architecture and data modeling

Nice to Have
* Experience with RAG architectures
* Experience with vector databases (e.g., Pinecone, Weaviate, FAISS)
* Experience with cloud platforms (AWS, Azure, or GCP)
* Docker/containerization experience
* Experience deploying AI models to production
* Familiarity with prompt engineering and evaluation workflows

Engagement Details
* Contract or freelance role
* Remote
* Initial short-term engagement with potential extension
* Rate based on experience
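For context, the "RAG-based workflows" mentioned above follow a simple retrieve-then-prompt shape. A deliberately toy sketch using bag-of-words cosine similarity; a real system would use embedding vectors and a vector database such as FAISS, and the assembled prompt would be sent to an LLM API rather than returned as a string:

```python
import math
from collections import Counter

def _vector(text):
    """Bag-of-words term counts (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = _vector(query)
    scored = sorted(documents, key=lambda d: _cosine(q, _vector(d)), reverse=True)
    return scored[:k]

def build_prompt(query, documents, k=2):
    """RAG skeleton: retrieve context, then stuff it into an LLM prompt.
    The LLM call itself is out of scope here."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swapping `_vector`/`_cosine` for an embedding model and an approximate-nearest-neighbour index is what the vector-database experience in the nice-to-have list covers.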
Collabora Online + Nextcloud Integration – Issue
Collabora Online + Nextcloud Integration – Status & Outstanding Issue

Environment
- Server: Dedicated Linux server (Plesk, EL9)
- Reverse proxy: Plesk-managed nginx (in front of Apache)
- Nextcloud: Self-hosted instance (HTTPS)
- Collabora: External Collabora Online server (Docker-based)
- Proxy model: nginx reverse proxy
- CDN / WAF: Disabled during testing
- SSL: Terminated at nginx (Collabora running without internal SSL)

What Has Been Completed (and Verified)

1. Collabora container deployment
- Collabora Online is running in Docker, exposed internally on port 9980
- Container starts cleanly and remains healthy
- Verified via direct local HTTP request: /hosting/discovery → HTTP 200 OK

2. nginx reverse proxy configuration
- nginx successfully proxies the Collabora endpoints /hosting/discovery, /cool, and /browser
- WebSocket upgrades are enabled and functioning
- Verified externally over HTTPS: /hosting/discovery → HTTP 200 OK

3. SSL / proxy alignment
- SSL is terminated at nginx
- Collabora started with SSL disabled internally and SSL termination enabled
- This matches the reverse proxy architecture; no TLS or certificate errors observed

4. CDN / firewall eliminated as a factor
- CDN/WAF fully disabled during testing
- Requests go directly from client → nginx → Collabora
- No CDN headers or interference present

5. Nextcloud Office configuration
- "Use your own server" selected; external Collabora URL configured (HTTPS)
- Built-in CODE server disabled; demo server not used
- Admin UI reports Collabora as reachable

6. WOPI allow list configuration
- WOPI allow list configured in Nextcloud
- No token-based authentication configured
- Certificate verification left enabled (public certificate in use)

7. nginx validation
- nginx configuration validates successfully; reload performed via Plesk tooling
- No nginx syntax or runtime errors

Current Behaviour (What Works)
- Collabora service starts normally; reverse proxy works correctly
- Discovery XML loads both internally (direct to port 9980) and externally (via HTTPS proxy)
- WebSocket connections establish successfully
- No networking, firewall, or SSL failures observed

Current Issue (Still Failing)
Error shown in the Nextcloud UI: "Document loading failed – Unauthorised WOPI host."

Interpretation of the Error
- Collabora is rejecting the WOPI request
- The rejection happens after discovery and WebSocket setup
- This indicates a WOPI host authorisation failure, not a connectivity issue

Likely Root Cause (High Confidence)
The Collabora domain allowlist regex does not exactly match the WOPI host string generated by Nextcloud. Typical causes:
- The host includes an explicit port (e.g. :443)
- The regex is not anchored (^ / $)
- Multiple valid hostnames are not fully accounted for
This is a known and common Collabora failure mode.

Evidence Supporting This Conclusion
- The network path is fully functional and SSL termination is correct
- Discovery and WebSockets work; the error is specific to WOPI authorisation
- CDN/WAF is disabled
- Logs show normal Collabora startup and request handling up to WOPI validation

Background: this broke while trying to get AI apps set up and working; Nextcloud document editing stopped working as a result.
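The suspected allowlist mismatch can be reproduced with a few lines of regex matching. The hostname below is a placeholder, and Collabora evaluates its own alias/allowlist patterns inside coolwsd, so treat this purely as an illustration of why a bare-hostname pattern is defeated by an explicit port:

```python
import re

# Hypothetical WOPI host string as Collabora may see it: hostname plus
# an explicit port appended.
wopi_host = "nextcloud.example.com:443"

# A pattern written for the bare hostname fails under full-match
# (anchored) semantics once the port is present:
strict = r"nextcloud\.example\.com"
assert re.fullmatch(strict, wopi_host) is None

# Allowing an optional port lets the same host through:
with_port = r"nextcloud\.example\.com(:\d+)?"
assert re.fullmatch(with_port, wopi_host) is not None
```

This matches the diagnosis above: everything up to WOPI validation works, and only the host-string comparison fails, so adjusting the Collabora allowlist pattern (optional port, correct anchoring, all valid hostnames) is the first thing to try.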
Full Stack / DevOps / AI Engineer
We’re looking for a hands-on Full Stack / DevOps / AI Engineer to help finalize and deploy core systems for eyeora.

Responsibilities
- Set up and maintain a production-ready deployment pipeline across GCP and AWS
- Configure CI/CD, infrastructure, environments, and monitoring
- Complete and ship an existing technical task to production standard
- Integrate AI APIs (e.g. LLMs, vision, audio, automation) into eyeora’s platform
- Work closely with product and engineering to optimize performance, cost, and scalability

Requirements
- Strong experience with full stack development (frontend + backend)
- Proven DevOps experience with GCP and/or AWS (Docker, CI/CD, cloud services)
- Experience integrating and deploying AI/ML APIs
- Ability to work independently and deliver end-to-end solutions
- Pragmatic, startup-minded, execution-focused

Nice to have
- Experience with media, XR, 3D, or real-time systems
- Cost optimization and scaling experience

This is a practical, delivery-focused role with real ownership and impact on the eyeora platform.
Senior Software Engineer - Long Term Collaboration
This is a long-term collaboration in software development. We are looking for experienced Senior Software Engineer collaborators based in the United States who can design, build, and maintain scalable software systems. You will play a key role in technical decision-making, system architecture, and mentoring junior engineers while collaborating closely with product, design, and infrastructure teams.

Key Responsibilities
- Design, develop, and maintain scalable and reliable software applications.
- Lead technical design discussions and contribute to system architecture decisions.
- Write clean, maintainable, and well-tested code.
- Review code and mentor junior and mid-level engineers.
- Collaborate with cross-functional teams to deliver high-quality products.
- Troubleshoot production issues and optimize system performance.
- Contribute to continuous improvement of development processes and best practices.

Required Qualifications
- Strong software engineering experience in modern development environments
- Proficiency in at least one major programming language (e.g., JavaScript, Python, Go, Java, or similar)
- Strong understanding of system design, APIs, and distributed systems
- Experience with cloud platforms (AWS, GCP, or Azure)
- Solid understanding of databases (SQL and/or NoSQL)
- Experience with CI/CD pipelines and version control systems (Git)
- Strong problem-solving and communication skills

Nice to Have
- Experience working in high-growth startups or product-focused companies
- Experience with microservices architecture
- Knowledge of containerization (Docker, Kubernetes)
- Exposure to blockchain or Web3 technologies (optional but a plus)

What We Offer
- Competitive compensation
- Flexible work environment
- Opportunity to work on impactful and innovative products
- Career growth and leadership opportunities

Location
US Remote
AI/ML Engineer (Python, AWS/Azure) – EST/CST Time Zone
Job Description
We are looking for a **mid-level AI/ML Engineer** with **strong English communication skills** and **2–3 years of hands-on AI/ML experience**. This role requires close collaboration with stakeholders, so **clear communication is critical**.

Responsibilities
- Design, build, and deploy **AI/ML models** using **Python**
- Develop and maintain **ML pipelines** (training, inference, monitoring)
- Work with **AWS or Azure** services for deployment and scaling
- Collaborate with cross-functional teams in **EST/CST time zones**
- Clearly explain technical ideas to non-technical stakeholders

Required Qualifications
- **2–3 years of real AI/ML experience**
- Strong **Python** skills (NumPy, Pandas, scikit-learn, PyTorch/TensorFlow)
- Experience with **AWS or Azure** (SageMaker, EC2, Lambda, Azure ML, etc.)
- **Excellent English communication skills**
- Able to work **fully aligned with EST/CST working hours**

Nice to Have
- Experience with **LLMs / GenAI**
- MLOps tools (MLflow, Airflow, Docker)
- APIs using FastAPI or Flask

Work Details
- Remote
- Full-time or Contract (your choice)
- Must be available during **EST/CST business hours**
Remote Backend Developer (Node.js or Python)
We are seeking a proficient and dependable Remote Backend Developer with expertise in either Node.js or Python to bolster our expanding engineering team on a contract basis. Candidates from Latin America or Europe are preferred. The ideal applicant will possess a strong command of English, with a demonstrated ability to construct scalable APIs and backend systems. Familiarity with technologies such as PostgreSQL, MySQL, Docker, and cloud platforms like AWS or GCP is essential. Responsibilities include developing and maintaining backend services, collaborating with frontend teams, and ensuring code quality through reviews. If you are passionate about clean coding and remote work, we invite you to apply.
Java Spring Boot & Microservices Developer
Java Spring Boot Developer (Microservices) – 3+ Years Experience

We are looking for an experienced Java Developer (Spring Boot) with strong hands-on expertise in microservices-based development and deployment. The ideal candidate should be able to design, develop, integrate, and deploy scalable microservices in a production environment.

Key Responsibilities
- Develop and maintain microservices using Java, Spring Boot, and Spring Cloud.
- Implement RESTful APIs, integrations, and backend business logic.
- Work with MySQL/PostgreSQL, JPA/Hibernate, caching, and messaging queues.
- Manage deployments on AWS / Docker / Kubernetes (any cloud experience preferred).
- Optimize system performance, troubleshoot issues, and ensure high availability.
- Collaborate with the team to deliver clean, efficient, and well-documented code.

Requirements
- 3+ years of experience in Java & Spring Boot.
- Strong understanding of microservices architecture.
- Experience in API development, CI/CD pipelines, Docker, and Git.
- Ability to work independently with minimal supervision.
- Good communication and problem-solving skills.

Project Type
- Ongoing development & enhancements
- Freelance / remote work with flexible timing