How to Build an AI Weather Chat Assistant with Amazon Bedrock, Django & Open-Meteo

Introduction
You know the drill. A user types "What's the weather like in Tokyo?" and your app either opens a Google Maps embed or hits a rigid REST endpoint that spits back raw JSON — leaving all the heavy lifting of parsing and presenting data to you. Building a truly conversational weather app means wiring together natural language understanding, live data, and a coherent response pipeline. That's three separate problems, and solving them the traditional way means months of custom NLP, brittle API glue, and a fragile chat UI.
The AI Weather Chat Assistant solves all three in one cohesive Django application. The user types any natural language weather question. An Amazon Bedrock–hosted LLM (Claude 3 Sonnet, Cohere Command R, or Amazon Titan) infers the geographic coordinates from the location name, calls a live weather API via a registered AI tool, and returns a conversational, emoji-enhanced response — all within seconds.
Real-world use cases:
- Developers learning Amazon Bedrock's tool use (function calling) API patterns
- CS students building full-stack AI applications for portfolios
- Freelancers prototyping conversational AI interfaces for travel or logistics clients
- Startups needing multi-model LLM integration backed by live external data
- AWS practitioners wanting hands-on experience with the Bedrock Converse API
This post walks through the architecture, tech stack, and implementation phases. It does not include full source code — that's available in the full course on labs.codersarts.com.
📄 Before you dive in — grab the free PRD template that maps out this entire system: architecture, API spec, sprint plan, and system prompt. [Download the free PRD]
How It Works: Core Concept
The Underlying Idea: LLM Tool Use (Function Calling)
Large language models are excellent at understanding intent and generating language — but they don't natively know what the temperature in London is right now. The solution is tool use (also called function calling): you define a structured tool specification, register it with the LLM, and when the model needs live data it emits a special tool_use message instead of a text response. Your backend intercepts that signal, calls the real data source, and feeds the result back into the conversation. The model then synthesises a final, grounded answer.
The naive approach — simply asking an LLM "What's the weather in Paris?" without tool use — fails for two reasons: the model's knowledge has a training cutoff (so it cannot report current conditions), and hallucinated weather data is dangerously plausible. Users trust weather reports; a model inventing "sunny, 22°C" when it's actually raining is worse than no answer at all.
Tool use solves this by constraining the model to only report weather data it received from the Weather_Tool, which calls the Open-Meteo API in real time.
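To make this concrete, here is a minimal sketch of what such a tool specification looks like in the Converse API's toolConfig format. The field names (toolSpec, inputSchema, json) follow Bedrock's API; the description text and the choice of string-typed coordinates are illustrative assumptions, not the course's exact spec.

```python
# A minimal Weather_Tool specification in Bedrock Converse toolConfig format.
# The description nudges the model to infer lat/lon from the location name
# itself, so no separate geocoding step is needed.
weather_tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "Weather_Tool",
                "description": (
                    "Get current weather for a location. Infer the latitude "
                    "and longitude from the location name the user provides."
                ),
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "latitude": {
                                "type": "string",
                                "description": "Latitude of the location.",
                            },
                            "longitude": {
                                "type": "string",
                                "description": "Longitude of the location.",
                            },
                        },
                        "required": ["latitude", "longitude"],
                    }
                },
            }
        }
    ]
}
```

This dictionary is passed as the toolConfig argument to every converse call, which is what registers the tool with the model.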
The Pipeline at a Glance
SETUP (per request)
User types query
│
▼
Django backend receives POST (AJAX)
│
▼
Bedrock Converse API called with:
- System prompt (weather-only constraint)
- User message
- Weather_Tool spec (latitude + longitude schema)
│
▼
LLM emits stopReason: "tool_use"
→ tool name: Weather_Tool
→ inferred lat/lon from location name

RUNTIME (per tool invocation)
Django backend intercepts tool_use signal
│
▼
Open-Meteo API called with lat/lon
→ returns current temperature, humidity,
wind speed, weather code, pressure, etc.
│
▼
Tool result returned to Bedrock conversation
│
▼
LLM emits stopReason: "end_turn"
→ Conversational weather report with °C/°F
│
▼
JSON response sent back to browser chat UI
Analogy: Think of the LLM as an expert travel concierge. When you ask "Is it a good day to visit the Eiffel Tower?", the concierge doesn't guess — they call the local bureau météorologique (Open-Meteo), get the report, and then advise you in plain English whether to bring an umbrella. The tool is the phone call; the model is the concierge synthesising the result.
System Architecture Deep Dive
Architecture Overview
The application has five distinct layers, each with a clear responsibility.
Frontend Layer — A Bootstrap 5 chat interface rendered by Django templates. Vanilla JavaScript handles form submission via the Fetch API, renders typing indicators while awaiting responses, and injects bot messages into the chat DOM. A model selector dropdown lets users switch between Claude 3 Opus/Sonnet/Haiku, Cohere Command R/R+, and Amazon Titan without a page reload.
Backend Layer — A Django 4.2 application with two views: index (renders the chat page) and chat_results (processes POST requests and returns JSON). The process_weather_request function orchestrates the entire Bedrock conversation loop. A recursive helper, process_model_response, handles the multi-turn tool_use pattern until the model reaches end_turn.
AI Layer — Amazon Bedrock's Converse API, accessed via Boto3. The Weather_Tool is registered as a toolSpec with a JSON schema requiring latitude and longitude. The system prompt enforces that the model only answers weather questions, always uses the tool for data, and formats temperatures in both metric and imperial units.
Data Layer — SQLite for development (Django's default). No custom models are defined — the app is stateless per request, with no conversation history persisted to the database. The Open-Meteo API response is ephemeral, used only within the request lifecycle.
External API Layer — Open-Meteo (api.open-meteo.com/v1/forecast) provides current weather: temperature, apparent temperature, humidity, precipitation, rain, snow, cloud cover, wind speed/direction/gusts, and WMO weather codes. No API key is required.
Component Table
| Component | Role | Options |
| --- | --- | --- |
| Web Framework | Handles routing, views, templates | Django 4.2, Flask, FastAPI |
| Frontend UI | Chat interface, AJAX form submission | Bootstrap 5 + vanilla JS, React, HTMX |
| LLM Provider | Runs the AI model with tool use | Amazon Bedrock, OpenAI, Anthropic direct API |
| AI Models | Understands queries, infers coordinates | Claude 3 (Sonnet/Opus/Haiku), Cohere Command R/R+, Amazon Titan |
| Tool Spec | Defines the Weather_Tool schema | Custom JSON schema via Bedrock toolConfig |
| AWS SDK | Calls Bedrock Converse API | Boto3 (Python), AWS SDK for JS |
| Weather API | Provides real-time weather data | Open-Meteo (free), OpenWeatherMap, WeatherAPI |
| Config Management | Loads AWS keys and model IDs | django-environ, python-dotenv, AWS SSM |
| Static Files | Serves CSS/JS in production | WhiteNoise, AWS S3 + CloudFront |
| WSGI Server | Runs Django in production | Gunicorn, uWSGI |
Data Flow Walkthrough
1. User types "What's the weather in Sydney?" and submits the chat form.
2. JavaScript intercepts the form submit, appends a loading indicator to the chat, and POSTs {location, model_id} to /chat/results/ via the Fetch API.
3. Django's chat_results view validates the POST body and calls process_weather_request(location, model_id).
4. A Boto3 session is created using AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from environment variables.
5. The Bedrock converse call is made: system prompt + user message + Weather_Tool spec.
6. Bedrock returns stopReason: "tool_use" with input: {latitude: -33.87, longitude: 151.21}.
7. invoke_tool calls fetch_weather_data with those coordinates against Open-Meteo.
8. Open-Meteo returns a JSON payload with 15+ current weather fields.
9. The tool result is appended to the conversation as a user role message.
10. Bedrock is called again with the updated conversation; the model returns stopReason: "end_turn".
11. The text response is extracted and returned as {error: false, message: "..."} JSON.
12. JavaScript replaces the loading indicator with the rendered weather report.
Non-Obvious Design Decisions
Decision 1 — Recursive tool-use loop with a depth cap. The process_model_response function calls itself recursively, decrementing max_recursion on each call (default: 5). This guards against infinite loops if the model repeatedly emits tool_use without reaching end_turn. A depth of 5 is sufficient for all single-location queries while preventing runaway API costs.
Decision 2 — Stateless request architecture. There are no Django models for storing conversation history. Each request is a fresh conversation. This is intentional: weather queries are transactional, not conversational over time. Statelessness keeps the app horizontally scalable and eliminates session management complexity — but it also means the model cannot remember what it told the user five minutes ago.
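The depth-capped recursion from Decision 1 can be sketched as follows. The Bedrock client is passed in rather than created inside the function, which lets a stub stand in for Bedrock during testing; the function names and message shapes are illustrative, not the course's exact code.

```python
def process_model_response(client, model_id, messages, tool_config,
                           invoke_tool, max_recursion=5):
    """Drive the Converse loop until the model stops asking for tools.

    `client` is anything with a boto3-style `converse` method.
    `invoke_tool` maps one toolUse request block to a result dict.
    """
    if max_recursion <= 0:
        # Depth cap: bail out instead of looping on repeated tool_use.
        raise RuntimeError("Too many tool invocations without a final answer")

    response = client.converse(
        modelId=model_id, messages=messages, toolConfig=tool_config,
    )
    output = response["output"]["message"]
    messages.append(output)

    if response["stopReason"] == "tool_use":
        # Answer every tool request in this turn, then recurse.
        tool_results = []
        for block in output["content"]:
            if "toolUse" in block:
                result = invoke_tool(block["toolUse"])
                tool_results.append({
                    "toolResult": {
                        "toolUseId": block["toolUse"]["toolUseId"],
                        # content must be a LIST of {"json": ...} objects
                        "content": [{"json": result}],
                    }
                })
        messages.append({"role": "user", "content": tool_results})
        return process_model_response(client, model_id, messages, tool_config,
                                      invoke_tool, max_recursion - 1)

    # stopReason == "end_turn": extract the final text answer.
    return "".join(b.get("text", "") for b in output["content"])
```

Injecting the client also makes the depth cap easy to exercise: a stub that always returns "tool_use" will raise after five rounds instead of hanging.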
Tech Stack Recommendation
Stack A — Beginner / Prototype (Weekend Build)
| Layer | Technology | Why |
| --- | --- | --- |
| Backend | Django 4.2 | Batteries included — ORM, admin, forms, templates |
| AI Provider | Amazon Bedrock (Claude 3 Haiku) | Lowest latency + cost for prototyping |
| Weather API | Open-Meteo | Free, no API key, 15+ current fields |
| Database | SQLite | Zero config, file-based |
| Frontend | Bootstrap 5 + vanilla JS | No build toolchain needed |
| Config | python-dotenv | Simple .env file management |
| Deployment | Railway or Render (free tier) | Git push to deploy |
Estimated monthly cost: ~$3–8 (Bedrock inference at low volume; Railway/Render free tier covers hosting).
Stack B — Production-Ready (Designed to Scale)
| Layer | Technology | Why |
| --- | --- | --- |
| Backend | Django 4.2 + Gunicorn | Production WSGI server, battle-tested |
| AI Provider | Amazon Bedrock (Claude 3 Sonnet) | Better reasoning for complex location inference |
| Weather API | Open-Meteo + caching layer | Redis cache reduces API calls for repeat queries |
| Database | PostgreSQL (AWS RDS) | Persistent, scalable, supports connection pooling |
| Frontend | Bootstrap 5 + HTMX | Eliminates custom fetch boilerplate |
| Config | AWS Systems Manager Parameter Store | Encrypted secrets, IAM-controlled access |
| Static Files | S3 + CloudFront | CDN-served, scales globally |
| Deployment | AWS Elastic Beanstalk or ECS Fargate | Auto-scaling, health checks |
| Monitoring | AWS CloudWatch + X-Ray | Request tracing, Bedrock cost monitoring |
| Auth | AWS IAM roles (instance profiles) | No credentials in environment variables |
Estimated monthly cost: ~$80–150 (RDS t3.micro ~$25, EB environment ~$30, Bedrock at moderate volume ~$20–80, CloudFront + S3 ~$5).
Implementation Phases
Phase 1: Project Setup & AWS Configuration
The first phase covers scaffolding the Django project, installing dependencies (boto3, django-environ, requests, gunicorn, whitenoise), and configuring AWS credentials. You will create a .env file with AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_MODEL_ID. The Django settings file reads these via django-environ, keeping secrets out of source control.
Key decisions:
- Which AWS region to use (us-east-1 has the broadest Bedrock model availability)
- IAM policy scope — least-privilege access to bedrock:InvokeModel and bedrock:Converse only
- Whether to use IAM user credentials (development) or instance profiles (production)
Setting up IAM policies with the minimum required Bedrock permissions — including how to avoid the common AccessDeniedException gotcha — is covered in detail in the full course with working, tested code.
Phase 2: Weather Tool & Open-Meteo Integration
This phase defines the Weather_Tool specification as a Python dictionary conforming to Bedrock's toolSpec schema. You will write the fetch_weather_data function that calls Open-Meteo with latitude and longitude and returns the full current weather payload.
Key decisions:
- Which Open-Meteo fields to request (temperature_2m, apparent_temperature, wind_speed_10m, weather_code, precipitation, humidity, cloud_cover, pressure_msl)
- Error handling for Open-Meteo failures (non-200 responses, network timeouts)
- Whether to parse and transform the raw JSON before returning it to the model, or pass it raw
Structuring the tool spec so Claude reliably infers coordinates for ambiguous location names (e.g., "Springfield") — including prompt engineering tricks to reduce hallucinated lat/lon — is covered in detail in the full course with working, tested code.
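One plausible shape for the fetch function is sketched below. The endpoint URL and the names in the current= list are Open-Meteo's documented parameters; splitting out a build_params helper is my own choice, made so the query shape is easy to unit-test without a network call.

```python
import requests

OPEN_METEO_URL = "https://api.open-meteo.com/v1/forecast"

# Open-Meteo "current" field names, per its documented parameter list.
CURRENT_FIELDS = (
    "temperature_2m,apparent_temperature,relative_humidity_2m,"
    "precipitation,weather_code,cloud_cover,pressure_msl,"
    "wind_speed_10m,wind_direction_10m,wind_gusts_10m"
)

def build_params(latitude, longitude):
    """Pure helper: assemble the query string parameters."""
    return {
        "latitude": latitude,
        "longitude": longitude,
        "current": CURRENT_FIELDS,
    }

def fetch_weather_data(latitude, longitude, timeout=10):
    """Call Open-Meteo and return the raw JSON payload, or an error dict."""
    try:
        resp = requests.get(OPEN_METEO_URL,
                            params=build_params(latitude, longitude),
                            timeout=timeout)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException as exc:
        # Returned to the model as a tool result so it can respond gracefully.
        return {"error": True, "message": f"Weather lookup failed: {exc}"}
```

Returning an error dict (rather than raising) lets the model explain the failure conversationally instead of the view throwing a 500.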
Phase 3: Bedrock Converse Loop
The core of the application: the process_weather_request function builds the initial conversation, sends it to Bedrock, and the recursive process_model_response handles the tool_use / end_turn loop.
Key decisions:
- System prompt design — how to constrain the model to weather-only topics without making responses robotic
- max_recursion depth — balancing safety (avoiding infinite tool calls) against complex multi-step queries
- How to structure tool results in the user role message (toolResult.content must be a list of {json: ...} objects)
- Error handling for each Bedrock exception: ValidationException, AccessDeniedException, ThrottlingException
The exact conversation structure that Bedrock requires for multi-turn tool use — including the precise JSON shape that causes silent failures if wrong — is covered in detail in the full course with working, tested code.
Phase 4: Django Views & AJAX Chat Interface
This phase wires everything into Django views and builds the chat UI. The index view renders the base template. The chat_results view accepts POST requests and returns JSON. The frontend uses the Fetch API to submit queries and render responses without page reloads.
Key decisions:
- CSRF strategy — using {% csrf_token %} in the template and appending it to FormData (avoids the common 403 Forbidden error with Django's CSRF middleware)
- Model selector UX — a dropdown that updates a hidden modelId field and shows a "model changed" confirmation message
- Typing indicator timing — a loading bubble that remains visible during Bedrock's response latency (typically 2–6 seconds)
Handling Django's CSRF requirement with vanilla JS Fetch — including the exact csrfmiddlewaretoken FormData pattern that avoids 403 errors — is covered in detail in the full course with working, tested code.
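As a sketch of the validation step inside chat_results, a small helper like the one below keeps the view thin. The field names (location, model_id) match the data flow described earlier; the allowed-model set is a hypothetical stand-in for however the app actually gates model IDs.

```python
# Hypothetical whitelist of Bedrock model IDs the UI exposes.
ALLOWED_MODELS = {
    "anthropic.claude-3-sonnet-20240229-v1:0",
    "anthropic.claude-3-haiku-20240307-v1:0",
}

def validate_chat_post(data):
    """Validate the chat POST body.

    Returns (payload, error); exactly one of the two is None, so the
    view can return a 400 JSON error without raising.
    """
    location = (data.get("location") or "").strip()
    model_id = data.get("model_id") or ""
    if not location:
        return None, "Please enter a location."
    if model_id not in ALLOWED_MODELS:
        return None, "Unknown or unsupported model."
    return {"location": location, "model_id": model_id}, None
```

In the view, a non-None error maps straight to {error: true, message: ...} JSON, mirroring the success shape the frontend already expects.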
Phase 5: Production Deployment
The final phase covers containerising the app with Docker, configuring Gunicorn and WhiteNoise for static file serving, and deploying to AWS Elastic Beanstalk or a PaaS like Render.
Key decisions:
- ALLOWED_HOSTS and DEBUG=False configuration in production
- Static file collection with collectstatic and WhiteNoise serving
- Environment variable injection via Elastic Beanstalk's environment configuration (not hardcoded .env files)
- Health check endpoint for load balancer configuration
Full Dockerfile, Procfile, Elastic Beanstalk configuration files, and a step-by-step deployment walkthrough are covered in detail in the full course with working, tested code.
Common Challenges
1. The Recursive Tool-Use Loop Hangs Indefinitely
Root cause: Bedrock returns stopReason: "tool_use" repeatedly if the tool result is malformed or the model cannot interpret it. Without a depth cap, the loop runs until a network timeout or memory exhaustion.
Fix: Implement a max_recursion guard (default 5) that returns an error response when exceeded. Log each recursion depth to CloudWatch so you can diagnose repeat tool calls in production.
2. Tool Results Rejected with ValidationException
Root cause: The toolResult message structure is unforgiving. The content field must be a list of objects with a json key — not a plain dict, not a string. Passing content: {"json": {...}} (a dict instead of a list) triggers a ValidationException whose message gives little hint about what is actually wrong.
Fix: Always wrap tool result content as [{"json": tool_response}]. Validate the structure against Bedrock's API reference before running end-to-end tests.
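A tiny helper makes the correct shape hard to get wrong. The structure follows the Converse API's toolResult format; the helper name itself is mine, not the course's.

```python
def make_tool_result_message(tool_use_id, tool_response):
    """Wrap a tool response in the message shape Bedrock's Converse API expects.

    Note that toolResult.content is a LIST of {"json": ...} objects;
    a bare dict or a string triggers a ValidationException.
    """
    return {
        "role": "user",
        "content": [
            {
                "toolResult": {
                    "toolUseId": tool_use_id,
                    "content": [{"json": tool_response}],
                }
            }
        ],
    }
```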
3. AccessDeniedException on Model IDs
Root cause: Not all Bedrock model IDs are available in all regions, and model access must be explicitly enabled in the AWS console under "Bedrock > Model access". Claude 3 Opus, for instance, may be locked behind a request form.
Fix: Enable the specific model IDs you intend to use in the Bedrock console before writing any code. Catch AccessDeniedException explicitly and return a user-facing "model unavailable" message rather than a generic 500 error.
4. Open-Meteo Returns WMO Weather Codes, Not Descriptions
Root cause: Open-Meteo's weather_code field returns integers (e.g., 61 = slight rain) that are meaningful only with the WMO code table in hand. Left as raw codes, the LLM must guess what each number means, which invites misinterpretation.
Fix: Either include a WMO code lookup table in the system prompt, or pre-process the Open-Meteo response in fetch_weather_data to add a weather_description field before returning data to the model. The latter is more reliable.
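The pre-processing option can be as simple as a lookup dict. The code-to-description mapping below follows the (abridged) WMO table that Open-Meteo documents; the weather_description field name is an assumption of this sketch.

```python
# Abridged WMO weather-code table, per Open-Meteo's documentation.
WMO_CODES = {
    0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast",
    45: "Fog", 48: "Depositing rime fog",
    51: "Light drizzle", 53: "Moderate drizzle", 55: "Dense drizzle",
    61: "Slight rain", 63: "Moderate rain", 65: "Heavy rain",
    71: "Slight snowfall", 73: "Moderate snowfall", 75: "Heavy snowfall",
    80: "Slight rain showers", 81: "Moderate rain showers",
    82: "Violent rain showers",
    95: "Thunderstorm", 96: "Thunderstorm with slight hail",
    99: "Thunderstorm with heavy hail",
}

def add_weather_description(payload):
    """Attach a human-readable description next to the numeric weather_code
    before the Open-Meteo payload is handed back to the model."""
    current = payload.get("current", {})
    code = current.get("weather_code")
    current["weather_description"] = WMO_CODES.get(code, f"Unknown code {code}")
    return payload
```

Calling this inside fetch_weather_data means the model never has to interpret a bare integer.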
5. CSRF Middleware Blocks AJAX POSTs
Root cause: Django's CsrfViewMiddleware rejects POST requests that don't carry a valid CSRF token. Vanilla JavaScript's Fetch API does not append one automatically.
Fix: Read {% csrf_token %} from the template into a hidden form field, then append it to FormData as csrfmiddlewaretoken. Alternatively, use the @csrf_exempt decorator on chat_results (acceptable for internal APIs, not recommended for public endpoints).
6. ThrottlingException Under Multi-User Load
Root cause: Amazon Bedrock enforces per-account token-per-minute (TPM) and requests-per-minute (RPM) quotas. A classroom or demo scenario with 10+ concurrent users can breach these limits.
Fix: Implement exponential backoff with jitter on ThrottlingException. For production, request a quota increase via the AWS Service Quotas console. Consider a Redis-based request queue to smooth traffic spikes.
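One way to sketch the backoff, with the sleep function injectable so it can be tested without actually waiting. Matching the exception by class name is a simplification for this sketch; real Boto3 code would inspect the error code on botocore's ClientError instead.

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=1.0, cap=20.0,
                 retryable=("ThrottlingException",), sleep=time.sleep):
    """Retry `call` with exponential backoff plus full jitter.

    Retries only exceptions whose class name is in `retryable`;
    the last attempt re-raises instead of sleeping.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            if type(exc).__name__ not in retryable or attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount up to the capped exponential.
            delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
            sleep(delay)
```

Wrapping the Bedrock converse call in with_backoff smooths over brief throttling bursts; sustained throttling still surfaces as an error after max_attempts.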
7. Different Models Handle Tool Specs Differently
Root cause: Claude models reliably follow the toolSpec schema. Cohere Command R and Amazon Titan have subtly different expectations around tool invocation, and some models may attempt to answer weather questions from training data rather than invoking the tool.
Fix: Add explicit instructions in the system prompt: "You MUST use the Weather_Tool for all weather data. Never guess or fabricate weather information." Test each model ID individually before exposing it in the UI.
Solving these issues took us over 40 hours of testing — the course walks you through each fix with working code.
Ready to Build This Yourself?
Understanding the architecture is only half the battle. The gap between "I understand how this works" and "I have a deployed, working application" is where most developers get stuck. Bedrock's tool-use conversation structure, Django's CSRF handling, the recursive response loop, and the Elastic Beanstalk deployment pipeline each have non-obvious gotchas that can cost you days.
The full course on Codersarts Labs gives you everything you need to ship this app yourself:
✅ Full, commented source code for every file
✅ Step-by-step video tutorials covering each phase
✅ Docker setup for local development
✅ Tested Bedrock configurations for all 7 supported model IDs
✅ Complete deployment walkthrough (Elastic Beanstalk + Render)
✅ Lifetime access — download once, keep forever
✅ Free updates as AWS updates Bedrock's API
✅ Private community for Q&A and code reviews
Want to build this with a Codersarts engineer guiding you live? Book a 1:1 guided session for $20/hour — your environment, your questions, your timeline.
Conclusion
The AI Weather Chat Assistant is a clean, teachable demonstration of three powerful concepts working together: Amazon Bedrock's Converse API for multi-model LLM access, tool use for grounding AI responses in live data, and Django for rapid full-stack delivery. The architecture is intentionally stateless — each request is self-contained — which makes it trivially horizontally scalable.
If you're starting from zero, begin with Stack A: Django + Claude 3 Haiku via Bedrock + Open-Meteo on a Render free tier. You can have the core tool-use loop running in a weekend and a deployed chat interface by Monday. Once you understand the conversation structure, swapping in Claude 3 Sonnet or adding Redis caching is straightforward.
The full source code, video walkthroughs, and deployment guides are waiting for you at labs.codersarts.com. Build it, ship it, and add it to your portfolio.