Building AI Travel Planner with Ollama and MCP - Part 1
- ganesh90
- Jun 16
- 18 min read
Updated: Jul 7
Ever wished you had a friend who's traveled everywhere, knows all the hidden gems, and could instantly create the perfect itinerary for any destination? Well, grab your favorite coffee because we're about to build exactly that - an AI-powered travel planner that combines local AI processing with intelligent research to become your ultimate travel companion!

Picture this: You type "Create a 5-day Paris itinerary focusing on foodie experiences" and within moments, you get a complete, personalized travel plan with restaurant recommendations, activity schedules, and insider tips. No more hours of research, no more decision paralysis - just smart, tailored travel planning at your fingertips!
Today, we're diving into the fascinating world of AI travel planning, where we'll combine Ollama (for local AI processing), RAG (Retrieval-Augmented Generation) for intelligent research, and some seriously clever fallback systems. By the end of this journey, we'll have built a system that would make even seasoned travel agents jealous!
Dependencies and Setup - Gathering the Travel Tech Toolkit
Before we embark on our coding adventure, let's make sure we have all the right tools in our backpack. Think of this as packing for a tech expedition - we need the right gear to make our journey smooth and successful!
What We'll Need:
🐍 Python 3.8+ (we used 3.10.0) - Our trusty programming language (like our passport - essential!)
🦙 Ollama - Local AI processing powerhouse (download from https://ollama.com/download)
🧠 An Ollama Model - Download any Llama model:
ollama pull llama3 # or llama2, codellama, etc.
💾 About 8GB of free space - For the AI models and our travel data
🖥️ GPU - An NVIDIA GPU with at least 4 GB of VRAM
Think of Ollama as hiring a brilliant local assistant who works entirely offline - no internet required once set up, completely private, and blazingly fast!
Code Walkthrough - Building Our AI Travel Empire
Creating and Configuring the Environment
First, we will install uv, which will help us create a working directory and add MCP.
To do that, run the following command:
pip install uv
Next, we will create a working directory named travel_planner-mcp:
uv init travel_planner-mcp
cd travel_planner-mcp
Now, we will create a virtual environment using this command:
python -m venv myenv
Then, we will activate the virtual environment:
For Windows:
myenv\Scripts\activate
For macOS and Linux:
source myenv/bin/activate
Next, we will install the MCP Python SDK (with its CLI extras) into the environment:
pip3 install "mcp[cli]"
Now, we are ready to create the Python files.
Alright, let's dive into the most exciting code adventure we've ever been on! We are going to show you every single line and explain it like we're best friends exploring a fascinating machine together. 🚀
The Travel Planner - Building an AI-Powered Travel Assistant with Ollama Integration
import asyncio
import json
import os
import random
import time
import re
from pathlib import Path
from typing import List, Dict, Any
import faiss
from sentence_transformers import SentenceTransformer
from mcp.server.fastmcp import FastMCP, Context
What's happening here? 🤔
Think of this section as unpacking your travel toolkit before an adventure. We're bringing together all the essential tools to build an intelligent travel planning assistant that can research destinations, create itineraries, and format them beautifully.
Core System Libraries:
import asyncio: Our multitasking travel coordinator! AsyncIO lets our server handle multiple planning requests simultaneously. Imagine a travel agent who can research hotels, check flights, and plan activities all at the same time without keeping anyone waiting.
import json: The universal travel document format! JSON helps us structure itineraries, research data, and AI responses in a format that's easy to parse and share. It's like having a standardized passport that works everywhere.
import os: Your system navigator that helps us find configuration files and data paths across different operating systems. Think of it as your GPS for the file system.
import random: The variety spice! This adds randomization to make each itinerary unique and interesting. It's like having a creative assistant who ensures no two trips are exactly the same.
import time: Our timestamp generator for creating unique session IDs and tracking when things happen. Essential for logging and debugging. It's like having a travel journal that automatically dates every entry.
import re: Regular expressions – our text parsing superhero! This becomes crucial for extracting JSON from AI responses and parsing structured data from text. It's like having a translator who can find and extract specific information from any document.
Advanced Libraries:
from pathlib import Path: Modern path handling that works seamlessly across all operating systems. It's like upgrading from paper maps to a smartphone navigation app!
from typing import List, Dict, Any: Type hints that make our code self-documenting. Like having clear labels on all your luggage so everyone knows what's inside.
AI and Search Libraries:
import faiss: Facebook's AI similarity search library – this is our semantic search engine! It helps us find relevant travel information based on meaning, not just keywords. Think of it as a librarian who understands what you are looking for, not just the exact words you use.
from sentence_transformers import SentenceTransformer: Converts text into mathematical representations (embeddings) that capture meaning. It's like having a universal translator that converts any language into a form our search engine understands.
The MCP Foundation:
from mcp.server.fastmcp import FastMCP, Context: The backbone of our travel planning service! FastMCP provides the framework for creating tools that AI assistants can use. Think of it as the infrastructure for building our digital travel agency.
This is like a master chef laying out all their ingredients and tools before creating a gourmet meal – organization is key to success!
Ollama Support - The Smart LLM Integration Strategy
try:
    from langchain_ollama import ChatOllama
    OLLAMA_AVAILABLE = True
    print("✅ LangChain-Ollama loaded successfully")
except ImportError as e:
    OLLAMA_AVAILABLE = False
    print(f"❌ Missing Ollama package: {e}")
The Brilliant Fallback Pattern! 🛡️
This is like having a travel agency that can work with AI assistants when available but still provides excellent service using experienced human agents (rule-based systems) when AI isn't available:
The Try Block - Optimistic AI Loading:
from langchain_ollama import ChatOllama: Attempts to load the LangChain Ollama integration for AI-powered travel planning
OLLAMA_AVAILABLE = True: Sets a flag indicating "Yes! We have AI superpowers!"
print("✅ LangChain-Ollama loaded successfully"): Visual confirmation that AI agents are ready
The Except Block - Graceful Degradation:
except ImportError as e:: Catches the specific error when Ollama libraries aren't installed
OLLAMA_AVAILABLE = False: Sets the flag to indicate "We'll use rule-based planning, and that's perfectly fine!"
print(f"❌ Missing Ollama package: {e}"): Clear communication about what's missing
Why This Pattern is Best Practice:
No crashes: System continues working even without AI
Clear feedback: User knows exactly what capabilities are available
Easy upgrade path: Just install Ollama to enable AI features
Smart adaptation: Code checks this flag before attempting AI operations
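Here's how that flag typically gets used downstream. This is a minimal sketch of the pattern (the rule_based_plan() helper is a hypothetical stand-in for the fallback logic, not code from the project):

def generate_day_plan(prompt: str) -> str:
    """Route a planning request to AI or rule-based logic."""
    if OLLAMA_AVAILABLE:
        # AI path: only attempted when the import above succeeded
        llm = ChatOllama(model="llama3")
        return llm.invoke(prompt).content
    # Fallback path: hypothetical rule-based planner
    return rule_based_plan(prompt)

The key idea: every caller checks the capability flag once, so a missing dependency degrades features instead of crashing the whole server.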
TravelRAGSystem Class - Intelligent Travel Research Assistant
class TravelRAGSystem:
    def __init__(self, data_path: str = "travel_data"):
        self.data_path = Path(data_path)
        self.model = SentenceTransformer('all-MiniLM-L6-v2')
        self.index = None
        self.documents = []
        self.embeddings = None
This class is the heart of our retrieval system – think of it as building a smart travel research assistant that can instantly find relevant information about any destination.
Constructor Deep Dive:
self.data_path = Path(data_path):
Smart storage: Defines where travel data lives
Cross-platform: Path object works on any OS
Default value: Uses "travel_data" folder if not specified
self.model = SentenceTransformer('all-MiniLM-L6-v2'):
The brain: Converts text to mathematical vectors
Model choice: 'all-MiniLM-L6-v2' is fast and efficient
Semantic understanding: Captures meaning, not just keywords
Example: "Paris restaurants" and "dining in Paris" would be recognized as similar
self.index = None:
Search engine: Will hold our FAISS index for fast similarity search
Lazy initialization: Created when we have documents to index
Memory efficient: No resources used until needed
self.documents = []:
Knowledge base: Stores all our travel guides and articles
Flexible format: Can hold any travel-related documents
Dynamic: Can be updated with new information
self.embeddings = None:
Vector cache: Stores mathematical representations of documents
Performance optimization: Avoids re-computing embeddings
Space-time tradeoff: Uses memory to save computation
It's like building a smart library where books arrange themselves based on what we are looking for, and the librarian instantly knows which books are most relevant to your question!
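Curious what "semantic understanding" looks like in practice? Here's a small standalone sketch (using util.cos_sim, a cosine-similarity utility from the same sentence-transformers library) that shows why "Paris restaurants" and "dining in Paris" land close together:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(["Paris restaurants", "dining in Paris", "Tokyo temples"])
print(embeddings.shape)  # (3, 384) - each text becomes a 384-dimensional vector

# Cosine similarity: closer to 1.0 means more semantically similar
print(util.cos_sim(embeddings[0], embeddings[1]))  # high - same idea, different words
print(util.cos_sim(embeddings[0], embeddings[2]))  # lower - different city and topic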
📂 Travel Data Loader - Feeding Your RAG Engine with Real-World Knowledge
In our RAG system, documents are the fuel. The better we load and organize them, the more intelligent and accurate our semantic search becomes. Let’s break down how we initialize, parse, and load travel data from text files.
🧠 Initialization – Start the Data Loading Engine
async def initialize(self):
    """Initialize the RAG system with travel data from text files"""
    await self.load_travel_data_from_files()
    await self.build_index()
initialize is the starting point that prepares your RAG system.
It asynchronously loads travel documents using load_travel_data_from_files().
Once loaded, it builds a FAISS semantic index using build_index() (covered below).
Think of this step as preheating the oven before baking your AI cake.
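If you want to try the class on its own before wiring it into the server, a quick smoke test might look like this (assuming the code above is saved in the same module):

rag = TravelRAGSystem()
asyncio.run(rag.initialize())  # loads (or creates) travel_data/*.txt and builds the index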
📄 Parsing Text Files – Extract Meaning from Structure
def parse_travel_file(self, file_path: Path) -> Dict:
    """Parse a single travel data text file"""
    try:
        with open(file_path, 'r', encoding='utf-8') as f:
            content = f.read().strip()

        lines = content.split('\n')
        if len(lines) < 4:
            log(f"Invalid format in {file_path.name}: expected at least 4 lines", "WARN")
            return None

        # Parse the structured format
        title = lines[0].strip()
        city = lines[1].strip().lower()
        category = lines[2].strip().lower()
        description = '\n'.join(lines[3:]).strip()

        return {
            "title": title,
            "city": city,
            "category": category,
            "content": description,
            "source_file": file_path.name
        }
    except Exception as e:
        log(f"Error parsing {file_path.name}: {e}", "ERROR")
        return None
This function takes in a text file path (like paris_dining.txt) and extracts structured metadata from its contents.
Returns a dictionary containing key fields like title, city, category, and description.
🔍 Open and Read the File
with open(file_path, 'r', encoding='utf-8') as f:
    content = f.read().strip()
Opens the file in read mode.
Reads the content as a single string and removes leading and trailing whitespace.
📑 Split and Validate Format
lines = content.split('\n')
if len(lines) < 4:
    log(f"Invalid format in {file_path.name}: expected at least 4 lines", "WARN")
    return None
Breaks the text into lines.
Ensures the file contains at least 4 lines (title, city, category, content).
Logs a warning and skips files that don’t match the format.
📦 Extract Structured Fields
title = lines[0].strip()
city = lines[1].strip().lower()
category = lines[2].strip().lower()
description = '\n'.join(lines[3:]).strip()
Line 1 → title: A human-friendly name.
Line 2 → city: Location tag for search filtering.
Line 3 → category: Tag like "dining" or "attractions".
Remaining Lines → description: Main content of the travel guide.
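To make the format concrete, here's what a matching file (say, paris_dining.txt) looks like on disk - the same layout the sample data generator below produces:

Paris Culinary Experiences and Local Cuisine
paris
dining
Le Comptoir du Relais in Saint-Germain serves exceptional traditional French bistro cuisine...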
✅ Return a Structured Document
return {
    "title": title,
    "city": city,
    "category": category,
    "content": description,
    "source_file": file_path.name
}
Returns a dictionary representing a document to feed into the vector indexer.
🧯 Catch and Log Errors
except Exception as e:
    log(f"Error parsing {file_path.name}: {e}", "ERROR")
    return None
If something breaks (e.g., unreadable file), it logs an error with the filename and exception.
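One note: these snippets call log() and safe_print() helpers that aren't shown in this excerpt. Here's a minimal sketch of what they might look like - our assumption, as the project's real versions may differ:

import time

def safe_print(message: str):
    """Print without crashing on consoles that can't render emoji."""
    try:
        print(message, flush=True)
    except UnicodeEncodeError:
        print(message.encode('ascii', 'replace').decode(), flush=True)

def log(message: str, level: str = "INFO"):
    """Timestamped, level-tagged console logging."""
    safe_print(f"[{time.strftime('%H:%M:%S')}] [{level}] {message}")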
📥 Load All Files – Build the Complete Document Set
async def load_travel_data_from_files(self):
    """Load travel data from text files in the data folder"""
    self.documents = []

    # Create data directory if it doesn't exist
    self.data_path.mkdir(exist_ok=True)

    # Check if data directory has files
    text_files = list(self.data_path.glob("*.txt"))
    if not text_files:
        log("No text files found in data directory, creating sample files...", "INFO")
        await self.create_sample_data_files()
        text_files = list(self.data_path.glob("*.txt"))

    # Load all text files
    for file_path in text_files:
        log(f"Loading {file_path.name}...")
        document = self.parse_travel_file(file_path)
        if document:
            self.documents.append(document)
            log(f"Loaded: {document['title']} ({document['city']}, {document['category']})")

    log(f"Loaded {len(self.documents)} travel documents from {len(text_files)} files", "SUCCESS")
This function loads all travel documents from disk into self.documents.
📁 Ensure the Data Directory Exists
self.data_path.mkdir(exist_ok=True)
Creates the data folder if it does not already exist (avoids crashing).
📂 Look for All .txt Files
text_files = list(self.data_path.glob("*.txt"))
Gathers all .txt files in the directory as Path objects.
🔄 Create Sample Files If None Are Found
if not text_files:
    log("No text files found in data directory, creating sample files...", "INFO")
    await self.create_sample_data_files()
    text_files = list(self.data_path.glob("*.txt"))
If the directory is empty, it auto-generates demo data by calling create_sample_data_files().
Then it refreshes the file list.
📚 Load Each File
for file_path in text_files:
    log(f"Loading {file_path.name}...")
    document = self.parse_travel_file(file_path)
    if document:
        self.documents.append(document)
        log(f"Loaded: {document['title']} ({document['city']}, {document['category']})")
Iterates over each file.
Parses it.
Appends it to self.documents if it was successfully parsed.
✅ Final Log Statement
log(f"Loaded {len(self.documents)} travel documents from {len(text_files)} files", "SUCCESS")
Prints how many documents were loaded and from how many files.
✨ Create Sample Data – Populate with Beautiful Examples
async def create_sample_data_files(self):
    """Create sample data files for demonstration"""
    sample_files = [
        {
            "filename": "paris_attractions.txt",
            "content": """Paris Iconic Landmarks and Museums
paris
attractions
The Eiffel Tower stands 330 meters tall with three observation levels offering panoramic views of Paris. Visit during sunset for magical golden hour photography. The Louvre Museum houses over 35,000 artworks including the Mona Lisa, Venus de Milo, and extensive Egyptian collections. Book timed entry tickets in advance. Notre-Dame Cathedral, currently under restoration, showcases Gothic architecture with flying buttresses and rose windows. Arc de Triomphe anchors the famous Champs-Élysées with 360-degree city views from the top. Sacré-Cœur Basilica crowns Montmartre hill, offering spiritual atmosphere and stunning vistas over the city."""
        },
        {
            "filename": "paris_dining.txt",
            "content": """Paris Culinary Experiences and Local Cuisine
paris
dining
Le Comptoir du Relais in Saint-Germain serves exceptional traditional French bistro cuisine in an intimate setting. Reservations essential. L'As du Fallafel in the Marais offers the city's best falafel with authentic Middle Eastern flavors. Café de Flore provides quintessential Parisian café culture with excellent coffee and people-watching opportunities. Du Pain et des Idées creates artisanal breads and pastries using traditional French techniques and organic ingredients. Pierre Hermé revolutionized the macaron with innovative flavor combinations and perfect texture. Visit the flagship store on Rue Bonaparte."""
        },
        {
            "filename": "paris_activities.txt",
            "content": """Paris Unique Experiences and Hidden Gems
paris
activities
Seine River cruise at sunset reveals Paris architecture from a unique water perspective, creating magical photo opportunities. Montmartre walking tour explores cobblestone streets where Picasso, Renoir, and Toulouse-Lautrec lived and worked. Latin Quarter food tour combines medieval history with culinary discoveries in the oldest part of Paris. Père Lachaise Cemetery offers peaceful walks among famous graves including Jim Morrison, Édith Piaf, and Oscar Wilde. Canal Saint-Martin boat rides show a different side of Paris away from tourist crowds, passing through historic locks and trendy neighborhoods."""
        },
        {
            "filename": "tokyo_attractions.txt",
            "content": """Tokyo Modern Marvels and Traditional Temples
tokyo
attractions
Tokyo Skytree offers breathtaking 360-degree views from 634 meters high, especially stunning at sunset. Senso-ji Temple in Asakusa, Tokyo's oldest temple, provides traditional atmosphere with incense, prayers, and traditional snacks. Shibuya Crossing, the world's busiest pedestrian intersection, creates an incredible urban spectacle. Meiji Shrine sits in a peaceful forest oasis in the heart of the city, dedicated to Emperor Meiji. Tsukiji Outer Market buzzes with fresh seafood, street food, and traditional cooking tools. Tokyo National Museum houses the world's largest collection of Japanese art and artifacts."""
        },
        {
            "filename": "tokyo_dining.txt",
            "content": """Tokyo Culinary Adventures from Street Food to Michelin Stars
tokyo
dining
Jiro's sushi represents the pinnacle of Japanese craftsmanship with decades of perfection in every piece. Reservations extremely difficult. Ramen Yokocho alleys offer authentic regional ramen styles from tonkotsu to miso. Try different shops to compare. Tsukiji fish market provides the freshest sashimi breakfast experience with tuna auctions and street food stalls. Izakayas in Golden Gai serve traditional drinking snacks and sake in tiny, atmospheric bars. Depachika food courts in department store basements offer incredible variety of prepared foods and sweets."""
        },
        {
            "filename": "london_attractions.txt",
            "content": """London Historic Landmarks and Royal Heritage
london
attractions
Tower of London houses the Crown Jewels and 1000 years of royal history with Yeoman Warder tours. London Eye provides spectacular views across the Thames and city skyline, especially beautiful at sunset. British Museum contains treasures from around the world including the Rosetta Stone and Egyptian mummies. Westminster Abbey, coronation site of British monarchs, showcases Gothic architecture and royal tombs. Buckingham Palace offers Changing of the Guard ceremony and opulent State Rooms during summer opening."""
        }
    ]

    for file_info in sample_files:
        file_path = self.data_path / file_info["filename"]
        with open(file_path, 'w', encoding='utf-8') as f:
            f.write(file_info["content"])
        log(f"Created sample file: {file_info['filename']}")

    log(f"Created {len(sample_files)} sample data files", "SUCCESS")
This method creates rich, structured travel guides for Paris, Tokyo, and London.
🗃️ Define a List of Sample Files
sample_files = [
    {
        "filename": "paris_attractions.txt",
        "content": """Paris Iconic Landmarks and Museums
paris
attractions
The Eiffel Tower stands 330 meters tall..."""
    },
    ...
]
Each sample is a dictionary with a filename and content string.
These content strings are in the same structured format as expected by parse_travel_file().
📝 Write Each File to Disk
for file_info in sample_files:
    file_path = self.data_path / file_info["filename"]
    with open(file_path, 'w', encoding='utf-8') as f:
        f.write(file_info["content"])
    log(f"Created sample file: {file_info['filename']}")
Iterates through each sample dictionary.
Writes the content to a .txt file inside self.data_path.
✅ Confirm Completion
log(f"Created {len(sample_files)} sample data files", "SUCCESS")
Logs how many files were successfully created.
FAISS Index Building - Creating Your Semantic Search Engine
async def build_index(self):
    """Build FAISS index from documents"""
    if not self.documents:
        return

    texts = [f"{doc['title']} {doc['content']}" for doc in self.documents]
    self.embeddings = self.model.encode(texts)

    dimension = self.embeddings.shape[1]
    self.index = faiss.IndexFlatL2(dimension)
    self.index.add(self.embeddings.astype('float32'))
The Search Engine Constructor! 🔍
This method transforms our text documents into a searchable index using AI embeddings.
Safety Check:
if not self.documents: return
Prevents errors if no documents loaded
Defensive programming at its finest
Text Preparation:
texts = [f"{doc['title']} {doc['content']}"...]
Combines title and content for comprehensive search
List comprehension for efficiency
Embedding Generation:
self.embeddings = self.model.encode(texts)
Converts text to vectors (384 dimensions for all-MiniLM-L6-v2)
Captures semantic meaning mathematically
Index Creation:
dimension = self.embeddings.shape[1]: Gets vector size
faiss.IndexFlatL2(dimension): Creates L2 distance index
index.add(...): Adds all embeddings to searchable index
It's like creating a GPS coordinate system for ideas – now we can find the "nearest" documents to any query based on meaning, not just matching words!
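If FAISS is new to you, this tiny standalone example shows the whole lifecycle - build an L2 index, add vectors, query for nearest neighbors (toy 4-dimensional vectors stand in for real embeddings):

import numpy as np
import faiss

# Three toy vectors; rows 0 and 2 are nearly identical
vectors = np.array([[0.1, 0.2, 0.3, 0.4],
                    [0.9, 0.8, 0.7, 0.6],
                    [0.1, 0.25, 0.3, 0.4]], dtype='float32')

index = faiss.IndexFlatL2(4)        # 4 = vector dimension
index.add(vectors)                  # index all three vectors
distances, indices = index.search(vectors[:1], k=2)
print(indices)    # [[0 2]] - the query matches itself first, then its near-twin
print(distances)  # [[0.  0.0025]] - smaller distance = closer match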
🔍 Search – Query the RAG System with Semantic Understanding
async def search(self, query: str, k: int = 5) -> List[Dict]:
    """Search for relevant documents"""
    if not self.index:
        log("No search index available", "WARN")
        return []

    query_embedding = self.model.encode([query])
    distances, indices = self.index.search(query_embedding.astype('float32'), k)

    results = []
    for i, idx in enumerate(indices[0]):
        if idx < len(self.documents):
            doc = self.documents[idx].copy()
            doc['relevance_score'] = float(distances[0][i])
            results.append(doc)

    log(f"Search for '{query}' returned {len(results)} results")
    return results
This asynchronous method takes a user’s query (str) and returns the top k relevant documents.
k defaults to 5, meaning it will return the top 5 most relevant matches.
🚫 Check for Index Availability
if not self.index:
    log("No search index available", "WARN")
    return []
If the semantic index (e.g., FAISS) is not built or initialized, log a warning and return an empty list.
🧠 Convert Query into Vector
query_embedding = self.model.encode([query])
Converts the input query into an embedding (vector representation) using the same model used to encode documents.
📏 Find the Closest Matches
distances, indices = self.index.search(query_embedding.astype('float32'), k)
Searches the FAISS index using the query embedding.
Returns the k closest document indices and their distances (lower is better).
📄 Collect and Annotate Results
results = []
for i, idx in enumerate(indices[0]):
    if idx < len(self.documents):
        doc = self.documents[idx].copy()
        doc['relevance_score'] = float(distances[0][i])
        results.append(doc)
Iterates through the returned indices.
Retrieves the original document using its index from self.documents.
Adds a relevance_score (the raw L2 distance, so lower means a closer match).
Appends it to the results list.
✅ Log and Return Final Output
log(f"Search for '{query}' returned {len(results)} results")
return results
Logs how many results were found.
Returns the final list of most relevant documents.
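Putting it together, a quick async test of the search pipeline might look like this (a sketch assuming the sample data files from earlier):

async def demo():
    rag = TravelRAGSystem()
    await rag.initialize()
    results = await rag.search("romantic dinner spots in Paris", k=3)
    for doc in results:
        print(f"{doc['title']} ({doc['city']}) - distance {doc['relevance_score']:.3f}")

asyncio.run(demo())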
🏙️ Get Available Cities – Filter with Geographic Awareness
def get_cities(self) -> List[str]:
    """Get list of available cities"""
    cities = set(doc['city'] for doc in self.documents)
    return sorted(cities)
Creates a set of all unique city names from the documents (e.g., paris, tokyo).
Converts it to a sorted list to maintain consistent output order.
🗂️ Get Available Categories – Filter by Document Type
def get_categories(self) -> List[str]:
    """Get list of available categories"""
    categories = set(doc['category'] for doc in self.documents)
    return sorted(categories)
Extracts all unique category values (like dining, attractions) from the documents.
Returns a sorted list to support dropdowns or filters in UI components.
🧭 Filter by City – Retrieve All Documents for a Specific Location
def get_documents_by_city(self, city: str) -> List[Dict]:
    """Get all documents for a specific city"""
    return [doc for doc in self.documents if doc['city'].lower() == city.lower()]
Filters the documents where the city field (case-insensitive) matches the input.
Useful for location-specific queries or interface filters.
🧾 Filter by Category – Retrieve All Documents for a Specific Type
def get_documents_by_category(self, category: str) -> List[Dict]:
    """Get all documents for a specific category"""
    return [doc for doc in self.documents if doc['category'].lower() == category.lower()]
Filters documents where the category field matches the specified value (like activities).
Enables category-based content browsing or filtering.
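Together, these four helpers make it easy to explore the knowledge base. For example (the outputs shown are what the sample data above would produce):

print(rag.get_cities())      # ['london', 'paris', 'tokyo']
print(rag.get_categories())  # ['activities', 'attractions', 'dining']
paris_docs = rag.get_documents_by_city("Paris")       # case-insensitive match
dining_docs = rag.get_documents_by_category("dining")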
DirectOllamaAgents Class - Your AI-Powered Travel Planning Team
class DirectOllamaAgents:
    def __init__(self, rag_system: TravelRAGSystem):
        self.rag_system = rag_system
        self.use_ollama = False
        self.llm = None
        self.setup_ollama()
Building Your AI Travel Agency! 🤖
This class manages the AI agents that create personalized travel plans. It's designed to work with or without AI, ensuring reliable service.
Instance Variables Explained:
self.rag_system: Reference to our research system
Enables AI to access travel knowledge
Provides context for better recommendations
self.use_ollama: Feature flag for AI availability
Starts as False (pessimistic default)
Set to True only after successful setup
self.llm: The language model instance
Holds our AI assistant when available
Remains None if Ollama isn't working
self.setup_ollama(): Immediate initialization attempt
Runs automatically on creation
Sets up AI if available
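Wiring it up is a one-liner - construction kicks off the Ollama setup immediately (a sketch, reusing the rag instance from earlier):

agents = DirectOllamaAgents(rag)  # __init__ calls setup_ollama() for us
if agents.use_ollama:
    print("AI-powered planning enabled")
else:
    print("Running with rule-based planning")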
Ollama Configuration
def setup_ollama(self):
    """Setup Ollama with smart model selection"""
    safe_print("[INFO] Setting up Ollama with smart model selection...")

    if not OLLAMA_AVAILABLE:
        safe_print("[ERROR] Ollama not available - using rule-based agents")
        return

    try:
        # Test Ollama connection
        import requests
        response = requests.get("http://localhost:11434/api/tags", timeout=3)
        if response.status_code != 200:
            safe_print("[ERROR] Ollama server not responding")
            return

        models = response.json().get("models", [])
        model_names = [m.get("name", "") for m in models]

        # Smart model selection - prefer quality over size for travel content
        # Only using your installed models
        model_priority = [
            # "llama3.2:3b",  # Best balance for travel content (2GB)
            "llama3.2:1b",    # Fast and efficient (1.3GB)
            "llama3:latest"   # Highest quality but larger (4.7GB)
        ]

        selected_model = None
        for preferred in model_priority:
            if preferred in model_names:
                selected_model = preferred
                safe_print(f"[INFO] Selected preferred model: {selected_model}")
                break

        if not selected_model:
            # Fallback to any llama model
            llama_models = [name for name in model_names if "llama" in name.lower()]
            if llama_models:
                selected_model = llama_models[0]
                safe_print(f"[INFO] Using fallback model: {selected_model}")

        if not selected_model:
            safe_print("[ERROR] No suitable Llama models found")
            return

        # Configure settings based on model size
        if "1b" in selected_model:
            # Settings for 1B model - faster but simpler
            settings = {
                "temperature": 0.8,
                "num_predict": 150,
                "num_ctx": 2048,
                "num_thread": 2
            }
            safe_print("[INFO] Using 1B model settings - optimized for speed")
        elif "3b" in selected_model:
            # Settings for 3B model - balanced
            settings = {
                "temperature": 0.7,
                "num_predict": 200,
                "num_ctx": 4096,
                "num_thread": 4
            }
            safe_print("[INFO] Using 3B model settings - balanced quality/speed")
        else:
            # Settings for larger models (7B+)
            settings = {
                "temperature": 0.7,
                "num_predict": 250,
                "num_ctx": 4096,
                "num_thread": 6
            }
            safe_print("[INFO] Using large model settings - optimized for quality")

        # Create LLM with model-specific settings
        self.llm = ChatOllama(
            model=selected_model,
            top_p=0.9,
            timeout=60,
            stop=["</end>", "\n---"],
            repeat_penalty=1.1,
            mirostat=1,
            mirostat_eta=0.1,
            mirostat_tau=5.0,
            **settings  # Apply model-specific settings
        )

        self.use_ollama = True
        safe_print(f"[SUCCESS] {selected_model} ready for travel planning!")

    except Exception as e:
        safe_print(f"[ERROR] Ollama setup failed: {e}")
        self.use_ollama = False
This method sets up the direct connection to Ollama, picks the best available model from a priority list, and tunes the generation settings to the model's size - all so it's ready to help you plan epic travel adventures.
⚙️ Kick Off with a Friendly Log
print("🦙 Setting up Direct Ollama integration...")
Nothing fancy — just a friendly log that announces we're getting the llama warmed up. Logging like this makes your code feel alive and helps you (or your team) debug easily.
🚦 Check If Ollama Is Installed
if not OLLAMA_AVAILABLE:
    safe_print("[ERROR] Ollama not available - using rule-based agents")
    return
Before we go any further, we check if OLLAMA_AVAILABLE is True. Maybe we are in an environment where Ollama isn’t installed — so we fall back gracefully to rule-based logic.
🔐 Pro Tip: Defensive programming like this saves you from runtime errors and gives users a fallback.
🧪 Wrap the Whole Thing in a Try Block
try:
We're about to make a network call, parse JSON, and initialize a model — so we wrap it in a try block to keep things safe and error-resistant.
🛠️ Import Requests and Ping the Local Ollama Server
import requests
response = requests.get("http://localhost:11434/api/tags", timeout=3)
if response.status_code != 200:
    safe_print("[ERROR] Ollama server not responding")
    return
We make a quick HTTP request (with a 3-second timeout) to see if Ollama is running and responsive. The /api/tags endpoint lists the models Ollama has pulled locally.
🕵️‍♀️ It's like calling your AI friend to see if they're awake and asking what books they have on the shelf.
🧾 Look Through the Available Models
models = response.json().get("models", [])
model_names = [m.get("name", "") for m in models]
We parse the JSON response and collect the name of every locally installed model so we can match them against a priority list of LLaMA-based models. Why LLaMA? Because these models are powerful and run well locally.
🧠 Fun Fact: LLaMA stands for Large Language Model Meta AI - a family of open models released by Meta.
🎯 Select a Model by Priority
model_priority = [
    "llama3.2:1b",    # Fast and efficient (1.3GB)
    "llama3:latest"   # Highest quality but larger (4.7GB)
]
selected_model = None
for preferred in model_priority:
    if preferred in model_names:
        selected_model = preferred
        safe_print(f"[INFO] Selected preferred model: {selected_model}")
        break
We walk through a priority list and pick the first preferred model that is actually installed, logging which one we chose.
🎛️ Pro Tip: If you support multiple models, you could allow configuration via a CLI flag or UI dropdown.
🚫 Handle the No-Model Edge Case
if not selected_model:
    # Fallback to any llama model
    llama_models = [name for name in model_names if "llama" in name.lower()]
    if llama_models:
        selected_model = llama_models[0]
        safe_print(f"[INFO] Using fallback model: {selected_model}")
if not selected_model:
    safe_print("[ERROR] No suitable Llama models found")
    return
If none of the preferred models is installed, we fall back to any Llama model we can find. If there's still nothing, we log an error and skip the rest - there's no point continuing without a brain to talk to.
⚖️ Match the Settings to the Model's Size
if "1b" in selected_model:
    settings = {"temperature": 0.8, "num_predict": 150, "num_ctx": 2048, "num_thread": 2}
elif "3b" in selected_model:
    settings = {"temperature": 0.7, "num_predict": 200, "num_ctx": 4096, "num_thread": 4}
else:
    settings = {"temperature": 0.7, "num_predict": 250, "num_ctx": 4096, "num_thread": 6}
Smaller models get a shorter context window (num_ctx) and fewer generated tokens (num_predict) so they stay fast; larger models get more headroom for detailed, high-quality itineraries.
🤖 Initialize the ChatOllama LLM
self.llm = ChatOllama(
    model=selected_model,
    top_p=0.9,
    timeout=60,
    stop=["</end>", "\n---"],
    repeat_penalty=1.1,
    mirostat=1,
    mirostat_eta=0.1,
    mirostat_tau=5.0,
    **settings  # Apply model-specific settings
)
This is where the magic happens - we instantiate the chat model with:
top_p=0.9: nucleus sampling that trims unlikely tokens while keeping output flexible
timeout=60: gives the model up to a minute to respond
stop=["</end>", "\n---"]: stop sequences that keep replies from rambling past their natural end
repeat_penalty=1.1: gently discourages the model from repeating itself
mirostat settings: adaptive sampling that keeps output quality consistent across long generations
**settings: unpacks the model-size-specific temperature, num_predict, num_ctx, and num_thread values chosen above
🧪 Pro Tip: For deterministic outputs like API calls or JSON, lower the temperature (0.2-0.4).
✅ Mark It as Ready to Use
self.use_ollama = True
safe_print(f"[SUCCESS] {selected_model} ready for travel planning!")
Everything worked, so we set the flag and announce which model is on duty - virtual confetti optional. From here on, we can use Ollama to handle actual AI requests.
❌ Handle Failures Gracefully
except Exception as e:
    safe_print(f"[ERROR] Ollama setup failed: {e}")
    self.use_ollama = False
If anything failed along the way, we catch the error and log it - and disable Ollama usage until it's fixed.
🔁 Recovery Tip: If this fails, check that the Ollama server is running (ollama serve) and that you've pulled at least one Llama model.
With Ollama set up, we are now ready to plug it into our research, itinerary generation, and travel advice systems!
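As a wrap-up, here's a hypothetical end-to-end smoke test tying both classes together (assuming everything above lives in one module):

async def main():
    # Build the research layer first, then hand it to the agents
    rag = TravelRAGSystem()
    await rag.initialize()
    agents = DirectOllamaAgents(rag)
    if agents.use_ollama:
        reply = agents.llm.invoke("Suggest one hidden gem in Paris.")
        print(reply.content)
    else:
        print("Install Ollama and pull a Llama model to enable AI planning.")

asyncio.run(main())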
Part 2 will be available soon at the following link: https://www.codersarts.com/post/building-ai-travel-planner-with-ollama-and-mcp-part-2
Transform Your AI Workflows with Codersarts
Whether you're building intelligent systems with MCP, implementing RAG for smart information retrieval, or developing robust multi-agent architectures, the experts at Codersarts are here to support your vision. From academic prototypes to enterprise-grade solutions, we provide:
Custom RAG Implementation: Build retrieval-augmented generation systems tailored to your domain
MCP-Based Agent Systems: Design and deploy modular, coordinated AI agents with FastMCP
Semantic Search with FAISS: Implement efficient vector search for meaningful content discovery
End-to-End AI Development: From setup and orchestration to deployment and optimization
Do not let architectural complexity or tooling challenges slow down your progress. Partner with Codersarts and bring your next-generation AI systems to life.
Ready to get started? Visit Codersarts.com or connect with our team to discuss your MCP or RAG-based project. The future of modular, intelligent automation is here - let's build it together!
