Building a Database MCP Server: The Ultimate Code Walkthrough - Part 1
- ganesh90
- Jun 13
- 15 min read
Updated: Jul 11
Picture this: You're at your computer, juggling three different databases for three different projects. You keep switching connection strings, remembering which database is for which project, and constantly worried about accidentally running queries on the wrong database. Sound familiar? Well, grab your favorite coding beverage because we're about to build something that'll make your database life infinitely easier! ☕

What if I told you we could create a super-intelligent database butler that automatically knows which database you want to work with just by looking at the file path you provide? No more connection string juggling, no more "Oops, wrong database!" moments, and definitely no more headaches!
Welcome to the world of Database MCP Servers – where AI meets databases in the most elegant way possible! 🚀
What Are We Building Today?
Imagine having a digital assistant that's like the Swiss Army knife of database operations. This isn't just another database tool – it's an intelligent system that can:
🔄 Manage multiple databases simultaneously without breaking a sweat
🛡️ Protect your data with built-in security features and confirmation systems
⚡ Reuse connections efficiently through smart connection pooling
🤖 Understand complex queries and prevent dangerous operations
Think of it as hiring the world's most competent database administrator who never sleeps, never makes mistakes, and can handle unlimited databases at once!
Dependencies and Setup: Assembling Your Database Toolkit
Before we dive into our coding adventure, let's gather all the magical ingredients for our database recipe. Think of this like preparing for an epic cooking session – you want everything ready before the fun begins!
Hardware Requirements & Setup
Minimum Requirements:
4GB+ RAM
Any modern CPU (no GPU needed!)
1GB+ free disk space
Project Structure:
your-database-project/
├── database_query_server.py
└── data/
    ├── sample_database.db
    └── your_database.db
Pro Tip: Start with SQLite databases – they're just files on your computer! No complicated PostgreSQL server setup required. You can literally create a database by just specifying a filename! 🎯
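For example, this really is all it takes to create a brand-new SQLite database file (the filename here is arbitrary):

import sqlite3

# Connecting creates the file on disk if it doesn't already exist
conn = sqlite3.connect("my_first_database.db")
conn.close()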
Code Walkthrough: Building Your AI Database Empire
Creating and Configuring the Environment
First, we will install uv, which will help us create a working directory and add MCP.
To do that, run the following command:
pip install uv
Next, we will create a working directory named database-mcp:
uv init database-mcp
cd database-mcp
Now, we will create a virtual environment using this command:
python -m venv myenv
Then, we will activate the virtual environment:
For Windows:
myenv\Scripts\activate
For macOS and Linux:
source myenv/bin/activate
Next, we will install the MCP Python SDK (with its CLI extras) into the environment:
pip install "mcp[cli]"
Now, we are ready to create the Python files.
Alright, let's dive into the most exciting code adventure you've ever been on! We are going to show you every single line and explain it like we're best friends exploring a fascinating machine together. 🚀
Data Preparation - Creating Your Perfect Test Database! 🗄️
Before we dive into our amazing MCP server, let's create a realistic database to work with! This section will show you how to build a comprehensive sample database that's perfect for testing all our database tools.
import sqlite3
from datetime import datetime


def create_database_from_sql():
    """Create database matching the provided SQL structure and data"""
    # Connect to the SQLite database (creates file if doesn't exist)
    conn = sqlite3.connect('sample_data.db')
    cursor = conn.cursor()

    # Enable foreign keys
    cursor.execute('PRAGMA foreign_keys = ON')

    # Create Users table
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS users (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name VARCHAR(100) NOT NULL,
            email VARCHAR(100) UNIQUE NOT NULL,
            age INTEGER,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    ''')

    # Create Products table
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS products (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            name VARCHAR(200) NOT NULL,
            description TEXT,
            price DECIMAL(10, 2),
            category VARCHAR(50),
            stock_quantity INTEGER DEFAULT 0,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    ''')

    # Create Orders table
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS orders (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            user_id INTEGER,
            product_id INTEGER,
            quantity INTEGER NOT NULL,
            total_amount DECIMAL(10, 2),
            status VARCHAR(20) DEFAULT 'pending',
            order_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            FOREIGN KEY (user_id) REFERENCES users(id),
            FOREIGN KEY (product_id) REFERENCES products(id)
        )
    ''')

    # Sample user data
    users_data = [
        (1, 'John Doe', 'john@example.com', 30, '2025-06-10 08:05:51'),
        (2, 'Jane Smith', 'jane@example.com', 25, '2025-06-10 08:05:51'),
        (3, 'Bob Johnson', 'bob@example.com', 35, '2025-06-10 08:05:51'),
        (4, 'Alice Brown', 'alice@example.com', 28, '2025-06-10 08:05:51'),
        (5, 'Charlie Wilson', 'charlie@example.com', 42, '2025-06-10 08:05:51')
    ]

    # Insert users with specific IDs
    for user in users_data:
        cursor.execute('''
            INSERT INTO users (id, name, email, age, created_at)
            VALUES (?, ?, ?, ?, ?)
        ''', user)

    # Sample product data
    products_data = [
        (1, 'Laptop', 'High-performance laptop', 999.99, 'Electronics', 50, '2025-06-10 08:05:51'),
        (2, 'Coffee Mug', 'Ceramic coffee mug', 12.99, 'Kitchen', 200, '2025-06-10 08:05:51'),
        (3, 'Book', 'Programming guide', 29.99, 'Books', 100, '2025-06-10 08:05:51'),
        (4, 'Headphones', 'Wireless headphones', 149.99, 'Electronics', 75, '2025-06-10 08:05:51'),
        (5, 'Desk Chair', 'Ergonomic office chair', 299.99, 'Furniture', 25, '2025-06-10 08:05:51'),
        (7, 'Fixed Premium Tea', 'Organic Premium Tea', 29.99, 'Food', 50, '2025-06-10 09:15:04')
    ]

    # Insert products with specific IDs
    for product in products_data:
        cursor.execute('''
            INSERT INTO products (id, name, description, price, category, stock_quantity, created_at)
            VALUES (?, ?, ?, ?, ?, ?, ?)
        ''', product)

    # Sample orders data
    orders_data = [
        (1, 1, 1, 1, 999.99, 'completed', '2025-06-10 08:05:51'),
        (2, 2, 2, 2, 25.98, 'pending', '2025-06-10 08:05:51'),
        (3, 3, 3, 1, 29.99, 'completed', '2025-06-10 08:05:51'),
        (5, 4, 5, 1, 299.99, 'pending', '2025-06-10 08:05:51'),
        (6, 5, 7, 2, 59.98, 'pending', '2025-06-10 09:15:04'),
        (8, 1, 4, 3, 38.97, 'processing', '2025-06-12 05:10:55')
    ]

    # Insert orders with specific IDs
    for order in orders_data:
        cursor.execute('''
            INSERT INTO orders (id, user_id, product_id, quantity, total_amount, status, order_date)
            VALUES (?, ?, ?, ?, ?, ?, ?)
        ''', order)

    # Update the sqlite_sequence table to match the SQL file
    cursor.execute("UPDATE sqlite_sequence SET seq = 13 WHERE name = 'users'")
    cursor.execute("UPDATE sqlite_sequence SET seq = 11 WHERE name = 'products'")
    cursor.execute("UPDATE sqlite_sequence SET seq = 11 WHERE name = 'orders'")

    # Commit changes
    conn.commit()
    print("Database created successfully!")
    print(f"Inserted {len(users_data)} users")
    print(f"Inserted {len(products_data)} products")
    print(f"Inserted {len(orders_data)} orders")
    return conn


def query_examples(conn):
    """Demonstrate some sample queries"""
    cursor = conn.cursor()
    print("\n" + "=" * 60)
    print('SAMPLE QUERIES')
    print("=" * 60)

    # All users
    print("\n1. All Users:")
    cursor.execute('SELECT * FROM users')
    for row in cursor.fetchall():
        print(row)

    # All products
    print("\n2. All Products:")
    cursor.execute('SELECT * FROM products')
    for row in cursor.fetchall():
        print(row)

    # All orders
    print("\n3. All Orders:")
    cursor.execute('SELECT * FROM orders')
    for row in cursor.fetchall():
        print(row)

    # Orders with user and product details
    print("\n4. Orders with User and Product Details:")
    cursor.execute('''
        SELECT o.id, u.name as user_name, p.name as product_name,
               o.quantity, o.total_amount, o.status, o.order_date
        FROM orders o
        JOIN users u ON o.user_id = u.id
        JOIN products p ON o.product_id = p.id
        ORDER BY o.order_date DESC
    ''')
    for row in cursor.fetchall():
        print(row)

    # Total sales by product category
    print("\n5. Total Sales by Product Category:")
    cursor.execute('''
        SELECT p.category, SUM(o.total_amount) as total_sales, COUNT(o.id) as order_count
        FROM orders o
        JOIN products p ON o.product_id = p.id
        WHERE o.status != 'cancelled'
        GROUP BY p.category
        ORDER BY total_sales DESC
    ''')
    for row in cursor.fetchall():
        print(row)


def main():
    # Create the database and tables
    conn = create_database_from_sql()

    # Run example queries
    query_examples(conn)

    # Close connection
    conn.close()
    print("\nDatabase saved as 'sample_data.db'")


if __name__ == "__main__":
    main()
Foundation Setup - Your Workshop Preparation
import sqlite3
from datetime import datetime


def create_database_from_sql():
    """Create database matching the provided SQL structure and data"""
    # Connect to the SQLite database (creates file if doesn't exist)
    conn = sqlite3.connect('sample_data.db')
    cursor = conn.cursor()
The Foundation Builder! 🏗️
This opening section is like setting up your workshop before building something amazing:
import sqlite3: Python's built-in SQLite interface - no installation required!
from datetime import datetime: Standard library datetime support (the sample data uses fixed timestamp strings, so this is mainly here for your own experiments)
sqlite3.connect('sample_data.db'): Creates a new database file or connects to an existing one
cursor = conn.cursor(): Creates our SQL command executor
PRAGMA foreign_keys = ON: Enables referential integrity - critical for professional databases!
Pro Tip: 💡 SQLite automatically creates the database file if it doesn't exist - like magic! ✨
Table Creation - The Database Blueprint
Users Table - Your Customer Foundation
# Create Users table
cursor.execute('''
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name VARCHAR(100) NOT NULL,
        email VARCHAR(100) UNIQUE NOT NULL,
        age INTEGER,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
''')
The Customer Registry Design:
CREATE TABLE IF NOT EXISTS: Safe creation that won't crash if table already exists
id INTEGER PRIMARY KEY AUTOINCREMENT: Auto-incrementing unique ID (like a customer number)
name VARCHAR(100) NOT NULL: Required name field with reasonable length limit
email VARCHAR(100) UNIQUE NOT NULL: Email must be unique and required (prevents duplicates)
age INTEGER: Optional age field (some users might not provide it)
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP: Automatically tracks when user was created
Like designing a user registration form with all the essential fields and built-in validation!
Products Table - Your Inventory Catalog
# Create Products table
cursor.execute('''
    CREATE TABLE IF NOT EXISTS products (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name VARCHAR(200) NOT NULL,
        description TEXT,
        price DECIMAL(10, 2),
        category VARCHAR(50),
        stock_quantity INTEGER DEFAULT 0,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
''')
The Product Catalog Design:
id INTEGER PRIMARY KEY AUTOINCREMENT: Unique product identifier
name VARCHAR(200) NOT NULL: Product name is required with generous length
description TEXT: Unlimited text for detailed product descriptions
price DECIMAL(10, 2): Precise decimal for money (supports up to $99,999,999.99)
category VARCHAR(50): Product categorization for organization
stock_quantity INTEGER DEFAULT 0: Inventory tracking with safe default
created_at TIMESTAMP: Product creation tracking
Why DECIMAL(10,2)? It documents the intent for currency values: up to 10 total digits with exactly 2 after the decimal point. One caveat: SQLite uses dynamic typing, so a DECIMAL column only gets NUMERIC affinity rather than true fixed-point storage - applications that need exact money arithmetic often store cents as an INTEGER instead (see the sketch below).
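Here's a minimal sketch of that cents-as-integers pattern (the table and values are purely illustrative):

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# price_cents is an INTEGER: 1299 means $12.99, so sums stay exact
cur.execute("CREATE TABLE items (name TEXT, price_cents INTEGER)")
cur.execute("INSERT INTO items VALUES (?, ?)", ("Coffee Mug", 1299))

total_cents = cur.execute("SELECT SUM(price_cents) FROM items").fetchone()[0]
print(f"Total: ${total_cents / 100:.2f}")  # Total: $12.99
conn.close()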
Orders Table - The Transaction Records
# Create Orders table
cursor.execute('''
    CREATE TABLE IF NOT EXISTS orders (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id INTEGER,
        product_id INTEGER,
        quantity INTEGER NOT NULL,
        total_amount DECIMAL(10, 2),
        status VARCHAR(20) DEFAULT 'pending',
        order_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        FOREIGN KEY (user_id) REFERENCES users(id),
        FOREIGN KEY (product_id) REFERENCES products(id)
    )
''')
The Transaction Ledger Design:
id INTEGER PRIMARY KEY AUTOINCREMENT: Unique order number
user_id INTEGER: Links to users table
product_id INTEGER: Links to products table
quantity INTEGER NOT NULL: How many items were ordered
total_amount DECIMAL(10, 2): Calculated total cost
status VARCHAR(20) DEFAULT 'pending': Order status with sensible default
FOREIGN KEY constraints: Maintain referential integrity
The Relationship Magic: These foreign keys create a web of connections:
One user can have many orders
One product can be in many orders
Each order belongs to one user and one product
Database enforces these relationships automatically (with foreign keys enabled) - see the quick demo below!
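Here's a tiny self-contained demo of that enforcement, using a throwaway in-memory database:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is OFF by default per connection
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id)
)""")

conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO orders (id, user_id) VALUES (1, 1)")  # OK: user 1 exists

try:
    conn.execute("INSERT INTO orders (id, user_id) VALUES (2, 999)")  # no such user
except sqlite3.IntegrityError as e:
    print(f"Rejected: {e}")  # Rejected: FOREIGN KEY constraint failed
conn.close()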
User Data Generation
# Sample user data (matching the SQL file)
users_data = [
    (1, 'John Doe', 'john@example.com', 30, '2025-06-10 08:05:51'),
    (2, 'Jane Smith', 'jane@example.com', 25, '2025-06-10 08:05:51'),
    (3, 'Bob Johnson', 'bob@example.com', 35, '2025-06-10 08:05:51'),
    (4, 'Alice Brown', 'alice@example.com', 28, '2025-06-10 08:05:51'),
    (5, 'Charlie Wilson', 'charlie@example.com', 42, '2025-06-10 08:05:51')
]
The User Data Strategy:
Diverse names: Represents different demographics and backgrounds
Age variety: Spans different adult age groups (25-42)
Complete information: All required fields populated
Tuple format: Efficient for bulk insertion operations
Smart User Insertion
# Insert users with specific IDs
for user in users_data:
    cursor.execute('''
        INSERT INTO users (id, name, email, age, created_at)
        VALUES (?, ?, ?, ?, ?)
    ''', user)
The Insertion Logic:
Explicit IDs: Passing the id column directly preserves the exact ID sequence from the original data
Parameterized query: Safe from SQL injection with ? placeholders
Fixed timestamps: The created_at strings are inserted verbatim, so every run reproduces the original signup times exactly
Why fixed timestamps? The goal here is exact replication - identical data on every run, which makes testing and debugging reproducible. And those ? placeholders matter more than they look - see the sketch below.
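Here's a minimal, self-contained sketch of what the placeholders protect you from (the malicious input string is purely illustrative):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

evil_name = "x'); DROP TABLE users; --"

# Unsafe: string formatting splices user input straight into the SQL text
# conn.executescript(f"INSERT INTO users (name) VALUES ('{evil_name}')")

# Safe: the driver passes the value separately, so it can never become SQL
conn.execute("INSERT INTO users (name) VALUES (?)", (evil_name,))
print(conn.execute("SELECT name FROM users").fetchone()[0])  # stored as plain text
conn.close()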
Product Catalog Creation - A Complete Store Inventory
# Sample product data
products_data = [
    (1, 'Laptop', 'High-performance laptop', 999.99, 'Electronics', 50, '2025-06-10 08:05:51'),
    (2, 'Coffee Mug', 'Ceramic coffee mug', 12.99, 'Kitchen', 200, '2025-06-10 08:05:51'),
    (3, 'Book', 'Programming guide', 29.99, 'Books', 100, '2025-06-10 08:05:51'),
    (4, 'Headphones', 'Wireless headphones', 149.99, 'Electronics', 75, '2025-06-10 08:05:51'),
    (5, 'Desk Chair', 'Ergonomic office chair', 299.99, 'Furniture', 25, '2025-06-10 08:05:51'),
    (7, 'Fixed Premium Tea', 'Organic Premium Tea', 29.99, 'Food', 50, '2025-06-10 09:15:04')
]
The Product Catalog Details:
ID gap preserved: There is no product with ID 6 - the data jumps from 5 to 7, exactly matching the original database
Identical categories: Categories match the original classification exactly
Varied timestamps: Most products share one timestamp, but Premium Tea was created later ('2025-06-10 09:15:04')
Precise Product Insertion
# Insert products with specific IDs
for product in products_data:
    cursor.execute('''
        INSERT INTO products (id, name, description, price, category, stock_quantity, created_at)
        VALUES (?, ?, ?, ?, ?, ?, ?)
    ''', product)
The Exact Product Insertion Logic:
Seven-parameter insertion: Matches complete product tuple structure
ID-controlled insertion: Maintains exact ID sequences including gaps
Preserved timestamps: Keeps original temporal patterns
Complete data preservation: Every field matches original exactly
Precise Order Data Matching
# Sample orders data
orders_data = [
    (1, 1, 1, 1, 999.99, 'completed', '2025-06-10 08:05:51'),
    (2, 2, 2, 2, 25.98, 'pending', '2025-06-10 08:05:51'),
    (3, 3, 3, 1, 29.99, 'completed', '2025-06-10 08:05:51'),
    (5, 4, 5, 1, 299.99, 'pending', '2025-06-10 08:05:51'),
    (6, 5, 7, 2, 59.98, 'pending', '2025-06-10 09:15:04'),
    (8, 1, 4, 3, 38.97, 'processing', '2025-06-12 05:10:55')
]
The Exact Order Recreation Strategy:
Specific order IDs: (1, 2, 3, 5, 6, 8) - note missing IDs 4 and 7, exactly matching original
Precise relationships: Each order links exact user_id to exact product_id
Exact quantities: Order quantities match original data precisely
Calculated totals: Total amounts match original financial calculations exactly
Status preservation: Order statuses ('completed', 'pending', 'processing') match exactly
Temporal accuracy: Timestamps preserve original order timing patterns
Controlled Order Insertion
# Insert orders with specific IDs
for order in orders_data:
    cursor.execute('''
        INSERT INTO orders (id, user_id, product_id, quantity, total_amount, status, order_date)
        VALUES (?, ?, ?, ?, ?, ?, ?)
    ''', order)
The Precision Transaction Recording:
Seven-parameter insertion: Complete order data with exact ID control
Relationship preservation: Maintains exact foreign key relationships
Financial accuracy: Preserves exact monetary values from original
Status integrity: Maintains exact order status from original data
Database Sequence Control - The Precision Calibration
cursor.execute("UPDATE sqlite_sequence SET seq = 13 WHERE name = 'users'") cursor.execute("UPDATE sqlite_sequence SET seq = 11 WHERE name = 'products'") cursor.execute("UPDATE sqlite_sequence SET seq = 11 WHERE name = 'orders'")
The Sequence Calibration Process:
sqlite_sequence table: SQLite's internal table that tracks auto-increment values
Users sequence = 13: Next user ID will be 14 (after existing IDs 1-5)
Products sequence = 11: Next product ID will be 12 (after existing IDs 1-7, accounting for gaps)
Orders sequence = 11: Next order ID will be 12 (after existing IDs 1-8, accounting for gaps)
Why sequence control matters:
Future insertions: New records will have correct auto-increment values
Gap preservation: Maintains exact ID patterns from original database
Consistency: Ensures new data follows the original numbering scheme - you can verify this yourself with the snippet below
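A quick way to check the calibration after running the script (a sketch that reads SQLite's internal bookkeeping table):

import sqlite3

conn = sqlite3.connect("sample_data.db")
for name, seq in conn.execute("SELECT name, seq FROM sqlite_sequence"):
    print(f"{name}: next auto-increment ID will be {seq + 1}")
# users: next auto-increment ID will be 14
# products: next auto-increment ID will be 12
# orders: next auto-increment ID will be 12
conn.close()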
Database Finalization - Precision Completion
# Commit changes
conn.commit()
print("Database created successfully!")
print(f"Inserted {len(users_data)} users")
print(f"Inserted {len(products_data)} products")
print(f"Inserted {len(orders_data)} orders")
return conn
The Precision Finalization Sequence:
conn.commit(): CRITICAL - Makes all precise changes permanent to disk
Exact reporting: Shows exactly what was recreated (5 users, 6 products, 6 orders)
return conn: Returns connection for immediate validation queries
⚠️ Critical Warning: Without conn.commit(), all your precise replication work disappears! It's like saving a precision blueprint - absolutely essential! 💾
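If you'd rather not rely on remembering commit(), sqlite3 connections also work as context managers - a minimal sketch:

import sqlite3

conn = sqlite3.connect("sample_data.db")
with conn:  # commits automatically if the block succeeds, rolls back on exception
    conn.execute("UPDATE products SET stock_quantity = stock_quantity - 1 WHERE id = ?", (1,))
conn.close()  # the context manager commits/rolls back, but does NOT close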
Query Examples - Validation and Analysis
def query_examples(conn):
    """Demonstrate some sample queries"""
    cursor = conn.cursor()
    print("\n" + "=" * 60)
    print('SAMPLE QUERIES')
    print("=" * 60)
The Validation Framework:
Separate function: Clean organization for database validation
Comprehensive testing: Multiple query types to verify data integrity
Visual formatting: Professional output for easy reading
Query 1: User Data Verification
# All users
print("\n1. All Users:")
cursor.execute('SELECT * FROM users')
for row in cursor.fetchall():
    print(row)
The User Registry Verification:
Complete user listing: Verifies all 5 users were inserted correctly
ID sequence check: Confirms IDs 1-5 are present
Data integrity: Validates names, emails, ages, and timestamps match exactly
Query 2: Product Catalog Validation
# All products
print("\n2. All Products:")
cursor.execute('SELECT * FROM products')
for row in cursor.fetchall():
    print(row)
The Product Inventory Verification:
Complete product listing: Shows all 6 products with exact details
Price accuracy: Validates exact pricing from original data
Query 3: Order Transaction Audit
# All orders
print("\n3. All Orders:")
cursor.execute('SELECT * FROM orders')
for row in cursor.fetchall():
    print(row)
The Transaction History Verification:
Complete order audit: Shows all 6 orders with exact relationships
ID gap confirmation: Verifies missing order IDs (4,7) match original
Financial accuracy: Confirms exact monetary values preserved
Main Execution Flow - The Orchestrator
def main():
    # Create the database and tables
    conn = create_database_from_sql()

    # Run example queries
    query_examples(conn)

    # Close connection
    conn.close()
    print("\nDatabase saved as 'sample_data.db'")


if __name__ == "__main__":
    main()
The Complete Workflow:
conn = create_database_from_sql(): Executes exact database replication with all precision controls
query_examples(conn): Validates recreated database with comprehensive testing
conn.close(): Properly closes database connection with all changes committed
File confirmation: Confirms exact location of recreated database file
if __name__ == "__main__": Ensures the script runs only when executed directly
This precision database recreation script gives you exact replication capabilities - perfect for setting up development environments, testing database migrations, and keeping data consistent across systems! 🚀
Output

PostgreSQL
To use this database in PostgreSQL, first open pgAdmin and connect to your server - in our case, "PostgreSQL 17". Then right-click the Databases section, create a new database, and give it a name - in our case, 'original_database'.

After that, use this script to copy the data over:
import sqlite3
import pandas as pd
from sqlalchemy import create_engine
# Connect to SQLite
sqlite_conn = sqlite3.connect('data/sample_database.db')
# Create SQLAlchemy engine for PostgreSQL
pg_engine = create_engine('postgresql://username:password@localhost:port/database_name')
# pg_engine = create_engine('postgresql://username:password@localhost:5432/sample_database')
# Get all tables
cursor = sqlite_conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = cursor.fetchall()
print(f"Found {len(tables)} tables to migrate:")
# Copy each table
for table in tables:
    table_name = table[0]
    print(f"Migrating table: {table_name}")
    df = pd.read_sql_query(f"SELECT * FROM {table_name}", sqlite_conn)
    df.to_sql(table_name, pg_engine, if_exists='replace', index=False, method='multi')
    print(f"✓ {table_name} migrated ({len(df)} rows)")
print("Migration completed!")
# Close connections
sqlite_conn.close()
pg_engine.dispose()
This powerful script is like having a professional moving company for your database - it safely transfers all your data from SQLite to PostgreSQL with precision and care. Let's break down every line and understand the magic behind database migration!
Import Section - Assembling Your Migration Toolkit
import sqlite3
import pandas as pd
from sqlalchemy import create_engine
The Migration Arsenal! 🛠️
Each import serves a specific purpose in our data migration pipeline:
import sqlite3: Python's built-in SQLite interface for reading from our source database
import pandas as pd: The data manipulation powerhouse that acts as our "universal translator" between databases
from sqlalchemy import create_engine: The sophisticated database connection manager that speaks fluent PostgreSQL
It's like having a translator (pandas), a truck driver (sqlite3), and a logistics coordinator (sqlalchemy) for your moving day!
Source Database Connection - Opening the Vault
# Connect to SQLite
sqlite_conn = sqlite3.connect('data/sample_database.db')
The Source Database Access:
sqlite3.connect('data/sample_database.db'): Creates a connection to your SQLite database file
File path: 'data/sample_database.db' - assumes your database lives in a 'data' subdirectory
Connection object: sqlite_conn becomes your gateway to read all the source data
Why this works: SQLite databases are just files, so connecting is as simple as pointing to the right file path!
Destination Database Connection - The Modern Warehouse
# Create SQLAlchemy engine for PostgreSQL (the placeholder string, filled in with real values)
pg_engine = create_engine('postgresql://postgres:@localhost:5432/original_database')
The Destination Database Setup:
create_engine(): SQLAlchemy's sophisticated connection factory
Connection string breakdown: 'postgresql://postgres:@localhost:5432/original_database'
postgresql://: Protocol - tells SQLAlchemy we're connecting to PostgreSQL
postgres:: Username - 'postgres' is the default superuser
@: Empty password (in production, you'd have a password here!)
localhost: Server location - the database is on the same machine
5432: Port number - PostgreSQL's default port
original_database: Database name - where your data will live
Connection String Anatomy:
postgresql://username:password@hostname:port/database_name
⚠️ Security Note: In production, never leave passwords empty! Use environment variables or secure credential management.
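A common pattern is to read the connection string from an environment variable instead of hard-coding it. A sketch, assuming a variable named DATABASE_URL:

import os
from sqlalchemy import create_engine

# e.g. export DATABASE_URL="postgresql://postgres:secret@localhost:5432/original_database"
db_url = os.environ.get("DATABASE_URL")
if not db_url:
    raise RuntimeError("Set DATABASE_URL before running the migration")
pg_engine = create_engine(db_url)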
Table Discovery - The Database Explorer
# Get all tables
cursor = sqlite_conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = cursor.fetchall()
print(f"Found {len(tables)} tables to migrate:")
The Table Hunter Process:
cursor = sqlite_conn.cursor(): Creates a cursor for executing SQL commands on SQLite
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';"): Queries SQLite's system catalog
sqlite_master: SQLite's internal table that stores metadata about your database
WHERE type='table': Filters to only get actual tables (not views, indexes, etc.)
tables = cursor.fetchall(): Retrieves all table names as a list of tuples
print(f"Found {len(tables)} tables to migrate:"): Reports how many tables were discovered
The sqlite_master Table: SQLite's sqlite_master is like a database's table of contents - it contains:
Table definitions
Index information
View definitions
Trigger definitions
Example output for our sample database:
Found 4 tables to migrate:
(That's users, products, and orders, plus sqlite_sequence - SQLite's internal bookkeeping table also matches type='table'.)
The Migration Loop - The Heavy Lifting
# Copy each table
for table in tables:
    table_name = table[0]
    print(f"Migrating table: {table_name}")
    df = pd.read_sql_query(f"SELECT * FROM {table_name}", sqlite_conn)
    df.to_sql(table_name, pg_engine, if_exists='replace', index=False, method='multi')
    print(f"✓ {table_name} migrated ({len(df)} rows)")
Step 1: Table Name Extraction
table_name = table[0]
print(f"Migrating table: {table_name}")
table[0]: Extracts the table name from the tuple (cursor.fetchall() returns tuples)
Progress reporting: Tells you which table is currently being processed
Step 2: Data Extraction with Pandas
df = pd.read_sql_query(f"SELECT * FROM {table_name}", sqlite_conn)
The Universal Data Translator:
pd.read_sql_query(): Pandas function that executes SQL and returns a DataFrame
f"SELECT * FROM {table_name}": F-string creates dynamic SQL query for each table
sqlite_conn: Uses our SQLite connection to execute the query
Result: df contains all data from the table in a pandas DataFrame
Why pandas? DataFrames are like universal containers that can hold any table structure and convert between different database formats seamlessly!
Step 3: Data Loading to PostgreSQL
df.to_sql(table_name, pg_engine, if_exists='replace', index=False, method='multi')
The Precision Data Insertion:
df.to_sql(): Pandas method that writes DataFrame to a SQL database
table_name: Destination table name (same as source)
pg_engine: Our PostgreSQL connection engine
if_exists='replace': Strategy for handling existing tables
index=False: Don't create an extra index column
method='multi': Use efficient bulk insertion
Parameter Deep Dive:
if_exists='replace' Options:
'fail': Raise error if table exists (default)
'replace': Drop existing table and recreate
'append': Add data to existing table
method='multi' Benefits (a batching sketch follows this list):
Bulk insertion: Inserts multiple rows in single SQL statement
Performance: Much faster than row-by-row insertion
Efficiency: Reduces database round trips
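For very large tables, you can also stream the copy in batches. A sketch reusing the sqlite_conn and pg_engine from the script above (the chunk size is illustrative):

import pandas as pd

# Read the source table in 10,000-row chunks instead of loading it all at once,
# then append each chunk to the destination table.
for chunk in pd.read_sql_query("SELECT * FROM orders", sqlite_conn, chunksize=10_000):
    chunk.to_sql("orders", pg_engine, if_exists="append", index=False, method="multi")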
Step 4: Success Confirmation
print(f"✓ {table_name} migrated ({len(df)} rows)")
len(df): Counts the number of rows successfully migrated
Visual feedback: Checkmark (✓) indicates successful completion
Row count: Confirms data integrity
Example output for our sample database:
Migrating table: users
✓ users migrated (5 rows)
Migrating table: products
✓ products migrated (6 rows)
Migrating table: orders
✓ orders migrated (6 rows)
Migrating table: sqlite_sequence
✓ sqlite_sequence migrated (3 rows)
Migration Completion - The Victory Lap
print("Migration completed!")
The Success Celebration:
Simple but important confirmation that the entire process finished
Gives you confidence that all tables were processed
Resource Cleanup - The Professional Finish
# Close connections
sqlite_conn.close()
pg_engine.dispose()
The Proper Cleanup Protocol:
sqlite_conn.close(): Closes the SQLite connection and releases file locks
pg_engine.dispose(): Properly shuts down the PostgreSQL connection pool
Why cleanup matters:
File locks: SQLite files can't be deleted while connections are open
Connection limits: PostgreSQL has maximum connection limits
Memory management: Prevents resource leaks
Professional practice: Always clean up after yourself - see the try/finally sketch below!
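To guarantee that cleanup runs even if a table fails to migrate, wrap the work in try/finally - a sketch of the same script's skeleton:

import sqlite3
from sqlalchemy import create_engine

sqlite_conn = sqlite3.connect('data/sample_database.db')
pg_engine = create_engine('postgresql://postgres:@localhost:5432/original_database')
try:
    ...  # table discovery + migration loop from above
finally:
    sqlite_conn.close()   # always release the SQLite file lock
    pg_engine.dispose()   # always shut down the connection pool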
After running the script, we will see that four tables have been created (our three data tables plus sqlite_sequence).


Now, if we right-click any table in the left side panel, or go to Tools in the navigation bar and select 'Query Tool', we can verify whether our data has been loaded successfully.
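If you prefer to verify from Python instead of the Query Tool, a quick sketch reusing the pg_engine from the migration script:

import pandas as pd

for table in ("users", "products", "orders"):
    count = pd.read_sql_query(f"SELECT COUNT(*) AS n FROM {table}", pg_engine)["n"].iloc[0]
    print(f"{table}: {count} rows")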

Part 2 is available at the following link: https://www.codersarts.com/post/building-a-database-mcp-server-the-ultimate-code-walkthrough-part-2
Transform Your Projects with Codersarts
Enterprise-grade database servers with advanced security and performance optimization
Multi-database integration systems supporting SQLite, PostgreSQL, MySQL, and cloud databases
AI-powered query optimization and intelligent database management tools
Production-ready deployment with monitoring, logging, and automated backup systems
Don't let complex implementations slow down your innovation. Connect with Codersarts today and turn your ideas into reality!
Ready to get started? Visit Codersarts.com or reach out to our team to discuss your next project. The future of intelligent automation is here – let's build it together!
