
AI Engineer Needed to Optimize LangChain + AWS Bedrock App
- Proposals: 20
- Remote
- #4444545
- Awarded


Description
You’ll be responsible for targeted performance fixes focused on measurable speed gains. The work includes optimizing Bedrock configuration, implementing real token-by-token streaming, adding Redis caching to replace S3-based message storage, and validating performance improvements with before-and-after latency metrics.
Estimated 6 hours of work.
Tasks
Optimize Bedrock Model Configuration: update bedrock_config.py to disable thinking mode, remove the unnecessary budget_tokens setting, and lower temperature from 1.0 to around 0.2–0.3 for deterministic, faster responses. Confirm that the configuration change reduces token-generation delay and verbosity (a config sketch follows this task list).
Implement Real Token Streaming (Backend): replace agent.invoke with a streaming method using Bedrock ConverseStream or LangChain's stream API. Ensure partial tokens are sent to the client in real time and test time-to-first-token performance (see the streaming sketch below).
Enable Live Streaming Display (Frontend): update the React frontend to handle streamed events progressively so users see text as it generates. Confirm the UI starts displaying output within 2–3 seconds of sending input.
Add Redis Caching for Chat Session Memory: replace S3-based chat history with Redis for in-memory storage. Update the chat_history_manager logic, validate cache persistence, and confirm message load time is near-instant (see the Redis sketch below).
Measure and Document Latency Improvements: record baseline timing (total response time and time-to-first-token), re-measure after the optimizations, and summarize the before/after results (see the timing harness below). Confirm at least a 4–5× improvement in perceived speed. All optimizations must preserve the exact response content and formatting from the LLM; only response speed may change.
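
For reference, a minimal sketch of the intended config change, assuming bedrock_config.py builds the model with LangChain's ChatBedrockConverse and that thinking mode was enabled via additional_model_request_fields; the model id is a placeholder, since the posting doesn't name the model.

from langchain_aws import ChatBedrockConverse

# Before (assumed): extended thinking enabled with a budget_tokens allowance
# and temperature=1.0, which spends reasoning tokens before any visible output.
# llm = ChatBedrockConverse(
#     model="anthropic.claude-3-7-sonnet-20250219-v1:0",  # placeholder id
#     temperature=1.0,
#     additional_model_request_fields={
#         "thinking": {"type": "enabled", "budget_tokens": 4096},
#     },
# )

# After: omit the thinking field entirely and lower the temperature.
llm = ChatBedrockConverse(
    model="anthropic.claude-3-7-sonnet-20250219-v1:0",  # placeholder id
    temperature=0.2,  # down from 1.0, per the task above
    max_tokens=1024,  # optional cap on verbosity (assumption)
)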
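A minimal streaming sketch, assuming a FastAPI backend (the actual framework isn't stated) and the llm object from the config sketch; the route name and payload shape are hypothetical. With ChatBedrockConverse, LangChain's .stream() drives Bedrock's ConverseStream under the hood.

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat/stream")
def chat_stream(req: ChatRequest):
    def token_generator():
        # Each chunk is forwarded as the model emits it, instead of
        # waiting for one blocking invoke() call to complete.
        for chunk in llm.stream(req.message):
            part = chunk.content
            # content may be a string or a list of content blocks,
            # depending on the provider; normalize to plain text.
            if isinstance(part, list):
                part = "".join(b.get("text", "") for b in part if isinstance(b, dict))
            if part:
                yield part
    return StreamingResponse(token_generator(), media_type="text/plain")

On the React side, the same endpoint can be consumed with fetch() and a ReadableStream reader so text renders progressively, which covers the live-display task.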
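For the Redis task, one possible shape for the chat_history_manager replacement, assuming LangChain's RedisChatMessageHistory; the Redis URL and TTL are placeholders (in AWS this would typically be an ElastiCache endpoint).

from langchain_community.chat_message_histories import RedisChatMessageHistory

def get_session_history(session_id: str) -> RedisChatMessageHistory:
    # In-memory Redis reads replace the per-message S3 round trips; the ttl
    # keeps abandoned sessions from accumulating (tune or drop as needed).
    return RedisChatMessageHistory(
        session_id=session_id,
        url="redis://localhost:6379/0",  # placeholder endpoint
        ttl=3600,
    )

history = get_session_history("demo-session")
history.add_user_message("Hello")
history.add_ai_message("Hi! How can I help?")
print(history.messages)  # loaded from Redis, near-instant vs. S3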
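Finally, a small timing harness for the before/after deliverable; stream_fn stands for any callable that takes a prompt and yields text chunks (for the baseline, a wrapper around the current invoke call that yields the full response once).

import json
import time

def measure_latency(stream_fn, prompt: str) -> dict:
    start = time.perf_counter()
    first_token = None
    chars = 0
    for text in stream_fn(prompt):
        if first_token is None:
            first_token = time.perf_counter() - start
        chars += len(text)
    total = time.perf_counter() - start
    return {
        "prompt": prompt,
        "time_to_first_token_s": round(first_token or total, 3),
        "total_response_s": round(total, 3),
        "chars_streamed": chars,
    }

# Run once on the baseline build and once after the optimizations, then
# diff the two summaries for the deliverable, e.g.:
# print(json.dumps(measure_latency(my_stream_fn, "test prompt"), indent=2))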
Deliverables
• Updated, tested backend and frontend code (GitHub commit or zip)
• Before/after latency test results (text or JSON summary)
• One short summary of what was changed and verified
Questions (please answer all in your proposal)
Describe your experience optimizing latency in LangChain or Bedrock-based applications.
Have you implemented real token streaming (not chunked post-processing) before?
What is your preferred setup for Redis caching in a Python/AWS environment?
Are you comfortable modifying both Python backend and React frontend code?
Can you start immediately and complete the project within 48 hours of receiving the contract offer?
Neeraja R.
100% (7)

Clarification Board

I have a question: you mentioned that it is 6 hours of work. How did you estimate that it would take 6 hours?

Hi, thanks for your job posting. I have a few questions about the project:
Could you confirm which AWS Bedrock model and configuration are currently being used (e.g., Claude 3 Sonnet, Haiku, or Titan), and whether streaming support is already enabled in the Bedrock console?
How are your Lambda concurrency and timeout settings configured right now? Performance bottlenecks sometimes arise from cold starts or low concurrency limits.
Do you already have a CloudWatch dashboard or logging system tracking latency metrics, or should I implement a timing profiler around key LangChain calls?
Best regards.