Building a Database MCP Server: The Ultimate Code Walkthrough - Part 3
- ganesh90
- Jul 4
- 20 min read
Updated: Jul 7
Prerequisite: This is a continuation of Part 2: Building a Database MCP Server: The Ultimate Code Walkthrough - Part 2

Meet Your Database Bouncer! 🛡️
def _validate_query(self, query: str, allow_schema_queries: bool = False, allow_delete: bool = False) -> bool:
    """Basic SQL injection prevention with options for schema queries and delete operations"""
    dangerous_keywords = [
        'drop', 'truncate', 'alter', 'create',
        'exec', 'execute', 'sp_', 'xp_', '--', ';--'
    ]
    if not allow_delete:
        dangerous_keywords.append('delete')
    query_lower = query.lower().strip()
    if allow_schema_queries:
        schema_patterns = [
            r'^pragma\s+table_info\s*\(',
            r'^show\s+(tables|columns)',
            r'^describe\s+\w+',
            r'^select.*information_schema',
            r'^select.*sqlite_master'
        ]
        for pattern in schema_patterns:
            if re.match(pattern, query_lower):
                return True
    if query_lower.startswith(('select', 'insert', 'update')):
        for keyword in dangerous_keywords:
            if keyword in query_lower:
                if keyword in ['create', 'alter'] and 'table' not in query_lower:
                    continue
                if keyword in ['delete', 'drop'] and query_lower.startswith('select'):
                    continue
                return False
    elif query_lower.startswith('delete') and allow_delete:
        for keyword in [k for k in dangerous_keywords if k != 'delete']:
            if keyword in query_lower:
                return False
        if 'where' not in query_lower:
            return False
    else:
        return False
    return True
This method is like having a professional security guard at a nightclub who knows exactly what to look for and how to handle different situations.
Method Signature Analysis:
def _validate_query(self, query: str, allow_schema_queries: bool = False, allow_delete: bool = False) -> bool:
Private method: Underscore prefix indicates internal use
Flexible security: Boolean flags allow different security levels
Clear return: True = safe, False = dangerous
The Dangerous Keywords Blacklist: Think of this as the "do not admit" list at an exclusive venue:
'drop', 'truncate': The data destroyers! 💥
DROP: Deletes entire tables or databases
TRUNCATE: Empties entire tables (faster than DELETE but more dangerous)
'alter', 'create': Structure modifiers that could break your application
ALTER: Changes table structure (could break your app)
CREATE: Creates new database objects (could fill up disk space)
'exec', 'execute': Code execution commands
Can run arbitrary code on the database server
'sp_', 'xp_': SQL Server stored procedures (often system-level access)
sp_: User-defined stored procedures
xp_: Extended procedures (system access)
'--', ';--': Comment syntax used in classic SQL injection attacks
--: SQL comment (used to "comment out" rest of query)
;--: Statement terminator + comment (advanced injection technique)
Dynamic Security Policy:
if not allow_delete::
Conditional danger: DELETE can be explicitly allowed when needed
Default security: DELETE is dangerous by default
Flexible approach: Same system can handle different security requirements
Query Normalization:
query_lower = query.lower().strip():
Case normalization: Handles "DELETE", "delete", "Delete", etc.
Whitespace cleanup: Removes leading/trailing spaces
Consistent processing: Ensures all subsequent checks work reliably
Query Validation - Schema Query Handling
if allow_schema_queries:
    schema_patterns = [
        r'^pragma\s+table_info\s*\(',
        r'^show\s+(tables|columns)',
        r'^describe\s+\w+',
        r'^select.*information_schema',
        r'^select.*sqlite_master'
    ]
    for pattern in schema_patterns:
        if re.match(pattern, query_lower):
            return True
The VIP List for Database Structure Queries! 📊
This section handles the special case where we need to examine database structure – like having a separate entrance for building inspectors.
Schema Pattern Recognition:
r'^pragma\s+table_info\s*\(': SQLite table structure query
r'...': Raw string (backslashes don't need escaping)
^: Must start at beginning of string
\s+: One or more whitespace characters
\s*: Zero or more whitespace characters
Example: "PRAGMA table_info(users)" shows column information
r'^show\s+(tables|columns)': MySQL SHOW commands
(tables|columns): Either "tables" or "columns"
SHOW TABLES: Lists all tables in database
SHOW COLUMNS: Lists columns in a table
r'^describe\s+\w+': DESCRIBE command (MySQL)
\w+: One or more word characters (table name)
Alternative syntax: Different way to get table structure
r'^select.*information_schema': Standard SQL metadata queries
.*: Any characters between SELECT and information_schema
information_schema: Standard database containing metadata
Industry standard: Works across different database types
r'^select.*sqlite_master': SQLite system table queries
sqlite_master: SQLite's system table containing schema info
SQLite specific: SQLite's version of information_schema
Pattern Matching Loop:
for pattern in schema_patterns:: Check each approved pattern
if re.match(pattern, query_lower):: Test if query matches pattern
return True: Immediately approve if any pattern matches
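To see these patterns in action, here's a tiny runnable sketch: a standalone helper (not part of the server itself) that applies the same regex whitelist to a few queries:

```python
import re

# The same whitelist of schema-inspection patterns used by the validator
schema_patterns = [
    r'^pragma\s+table_info\s*\(',
    r'^show\s+(tables|columns)',
    r'^describe\s+\w+',
    r'^select.*information_schema',
    r'^select.*sqlite_master'
]

def is_schema_query(query: str) -> bool:
    """Return True if the query matches an approved schema pattern."""
    query_lower = query.lower().strip()
    return any(re.match(p, query_lower) for p in schema_patterns)

print(is_schema_query("PRAGMA table_info(users)"))  # True
print(is_schema_query("SHOW TABLES"))               # True
print(is_schema_query("SELECT * FROM users"))       # False
```

Because every pattern is anchored with `^`, a dangerous query that merely mentions `pragma` or `sqlite_master` somewhere in its middle doesn't get the VIP treatment.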
Main Query Validation Logic
if query_lower.startswith(('select', 'insert', 'update')):
    for keyword in dangerous_keywords:
        if keyword in query_lower:
            if keyword in ['create', 'alter'] and 'table' not in query_lower:
                continue
            if keyword in ['delete', 'drop'] and query_lower.startswith('select'):
                continue
            return False
elif query_lower.startswith('delete') and allow_delete:
    for keyword in [k for k in dangerous_keywords if k != 'delete']:
        if keyword in query_lower:
            return False
    if 'where' not in query_lower:
        return False
else:
    return False
return True
The Intelligent Security Engine! 🧠
This section implements sophisticated, context-aware security that goes beyond simple keyword blocking.
Safe Operation Categories:
query_lower.startswith(('select', 'insert', 'update')):
Tuple check: Checks against multiple safe operations at once
SELECT: Data retrieval (generally safe)
INSERT: Data addition (controlled modification)
UPDATE: Data modification (controlled changes)
Context-Aware Exception Handling:
if keyword in ['create', 'alter'] and 'table' not in query_lower::
Nuanced security: CREATE VIEW or ALTER SESSION might be acceptable
Table-specific blocking: Only dangerous when affecting table structure
continue: Skip this keyword, check others
if keyword in ['delete', 'drop'] and query_lower.startswith('select')::
Reading vs. Doing: SELECT can mention dangerous words safely
Example: SELECT * FROM delete_log (reading about deletion, not doing it)
Context matters: Same word can be safe or dangerous
DELETE Operation Special Handling:
elif query_lower.startswith('delete') and allow_delete::
Explicit permission: DELETE must be explicitly allowed
Special security: DELETE operations get extra scrutiny
for keyword in [k for k in dangerous_keywords if k != 'delete']::
List comprehension: Creates new list excluding 'delete' itself
Focused scan: Looks for OTHER dangerous keywords in DELETE queries
No free pass: DELETE queries still checked for injection attempts
if 'where' not in query_lower::
Critical safety: DELETE without WHERE deletes EVERYTHING!
Data protection: Prevents catastrophic data loss
Required specificity: Forces user to specify what to delete
Default Deny Principle:
else: return False:
Whitelist approach: Only explicitly allowed operations pass
Security by default: Anything not on the approved list gets rejected
Fail safe: When in doubt, block it
Final Approval:
return True:
Passed all tests: Query survived all security checks
Safe to execute: Has been thoroughly validated
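To make the whole decision flow concrete, here's the validator lifted out of the class as a standalone function you can poke at interactively (identical rules, minus `self`):

```python
import re

def validate_query(query: str, allow_schema_queries: bool = False,
                   allow_delete: bool = False) -> bool:
    """Standalone copy of the validator's rules (the method body, without self)."""
    dangerous_keywords = [
        'drop', 'truncate', 'alter', 'create',
        'exec', 'execute', 'sp_', 'xp_', '--', ';--'
    ]
    if not allow_delete:
        dangerous_keywords.append('delete')
    query_lower = query.lower().strip()
    if allow_schema_queries:
        schema_patterns = [
            r'^pragma\s+table_info\s*\(',
            r'^show\s+(tables|columns)',
            r'^describe\s+\w+',
            r'^select.*information_schema',
            r'^select.*sqlite_master'
        ]
        for pattern in schema_patterns:
            if re.match(pattern, query_lower):
                return True
    if query_lower.startswith(('select', 'insert', 'update')):
        for keyword in dangerous_keywords:
            if keyword in query_lower:
                if keyword in ['create', 'alter'] and 'table' not in query_lower:
                    continue
                if keyword in ['delete', 'drop'] and query_lower.startswith('select'):
                    continue
                return False
    elif query_lower.startswith('delete') and allow_delete:
        for keyword in [k for k in dangerous_keywords if k != 'delete']:
            if keyword in query_lower:
                return False
        if 'where' not in query_lower:
            return False
    else:
        return False
    return True

print(validate_query("SELECT * FROM users WHERE age > 30"))                 # True
print(validate_query("DROP TABLE users"))                                   # False
print(validate_query("DELETE FROM users", allow_delete=True))               # False (no WHERE)
print(validate_query("DELETE FROM users WHERE id = 5", allow_delete=True))  # True
```

Running a few queries through it like this is the quickest way to sanity-check each branch before wiring the method into the server.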
Query Execution Engine
async def execute_query(self, query: str, params: tuple = None,
                        fetch_all: bool = True, limit: int = None,
                        allow_schema_queries: bool = False, allow_delete: bool = False) -> Dict[str, Any]:
    """Execute a parameterized query safely"""
    if not self._validate_query(query, allow_schema_queries, allow_delete):
        raise ValueError("Query contains potentially dangerous operations or missing WHERE clause for DELETE")
    try:
        cursor = self.connection.cursor()
        if limit and not allow_schema_queries and not query.strip().lower().startswith('delete'):
            limit = min(limit, self.max_results)
            if not re.search(r'\blimit\b', query, re.IGNORECASE):
                query += f" LIMIT {limit}"
        if params:
            cursor.execute(query, params)
        else:
            cursor.execute(query)
        result = {
            'success': True,
            'query': query,
            'rowcount': cursor.rowcount
        }
        if query.strip().lower().startswith(('select', 'pragma', 'show', 'describe')):
            if fetch_all:
                rows = cursor.fetchall()
                result['data'] = [dict(row) for row in rows] if rows else []
                result['count'] = len(result['data'])
            else:
                row = cursor.fetchone()
                result['data'] = dict(row) if row else None
                result['count'] = 1 if row else 0
        else:
            if self.db_type == "sqlite":
                self.connection.commit()
            result['affected_rows'] = cursor.rowcount
            if query.strip().lower().startswith('insert'):
                result['last_insert_id'] = cursor.lastrowid
        return result
    except Exception as e:
        logger.error(f"Query execution error: {e}")
        return {
            'success': False,
            'error': str(e),
            'query': query
        }
The Heart of Database Operations! ⚙️
This is the sophisticated engine that safely executes queries with multiple layers of protection and intelligent result handling.
Method Signature Deep Dive:
async def execute_query(self, query: str, params: tuple = None, fetch_all: bool = True, limit: int = None, allow_schema_queries: bool = False, allow_delete: bool = False) -> Dict[str, Any]:
Comprehensive parameters: Handles every type of database operation
Type hints: Clear documentation of what each parameter expects
Return type: Always returns a dictionary with consistent structure
Security First Validation:
if not self._validate_query(query, allow_schema_queries, allow_delete):
Mandatory security: Every query must pass validation first
No exceptions: Security cannot be bypassed
Clear error: Descriptive message about what went wrong
Smart LIMIT Injection System:
if limit and not allow_schema_queries and not query.strip().lower().startswith('delete'):
    limit = min(limit, self.max_results)
    if not re.search(r'\blimit\b', query, re.IGNORECASE):
        query += f" LIMIT {limit}"
LIMIT Logic Breakdown:
if limit: Only if a limit was requested
not allow_schema_queries: Don't limit schema queries (they're usually small)
not query.strip().lower().startswith('delete'): Don't limit DELETE operations
limit = min(limit, self.max_results): Never exceed the safety maximum
re.search(r'\blimit\b', query, re.IGNORECASE): Check if query already has LIMIT
\b: Word boundary (prevents matching "unlimited")
re.IGNORECASE: Case-insensitive search
query += f" LIMIT {limit}": Append LIMIT clause if needed
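The capping-and-appending logic can be sketched as a tiny standalone function (`max_results=100` here is just an illustrative default, not necessarily the server's configured value):

```python
import re

def append_limit(query: str, limit: int, max_results: int = 100) -> str:
    """Cap `limit` at max_results and append a LIMIT clause if one isn't already present."""
    limit = min(limit, max_results)                      # never exceed the safety maximum
    if not re.search(r'\blimit\b', query, re.IGNORECASE):
        query += f" LIMIT {limit}"                       # word boundary avoids matching "unlimited"
    return query

print(append_limit("SELECT * FROM users", 500))          # SELECT * FROM users LIMIT 100
print(append_limit("SELECT * FROM users LIMIT 5", 50))   # SELECT * FROM users LIMIT 5
print(append_limit("SELECT * FROM unlimited_plans", 10)) # SELECT * FROM unlimited_plans LIMIT 10
```

The last example is exactly what the `\b` word boundary buys you: a table named `unlimited_plans` contains the letters "limit" but still gets a LIMIT clause appended.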
Parameterized Query Execution:
if params:
    cursor.execute(query, params)
else:
    cursor.execute(query)
Conditional execution: Different paths for parameterized vs. simple queries
Example parameterized: SELECT * FROM users WHERE age > ? with params (25,)
Comprehensive Result Building:
result = {
    'success': True,
    'query': query,
    'rowcount': cursor.rowcount
}
Success indicator: Clear status for calling code
Query echo: Shows what was actually executed (useful for debugging)
Row count: Database-provided count of affected rows
Query Type Intelligence:
if query.strip().lower().startswith(('select', 'pragma', 'show', 'describe')):
Data Retrieval Queries Handling:
Multiple query types: SELECT, PRAGMA, SHOW, DESCRIBE all return data
Flexible fetching: Can get all results or just first one
Data conversion: Converts database rows to Python dictionaries
Fetch All vs. Fetch One:
if fetch_all:
    rows = cursor.fetchall()
    result['data'] = [dict(row) for row in rows] if rows else []
    result['count'] = len(result['data'])
else:
    row = cursor.fetchone()
    result['data'] = dict(row) if row else None
    result['count'] = 1 if row else 0
fetch_all=True: Gets all matching records as a list
fetch_all=False: Gets only the first record
Safe conversion: [dict(row) for row in rows] converts each row to dictionary
Null handling: Handles empty results gracefully
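One detail worth knowing: `dict(row)` only works because the connection's `row_factory` is set to `sqlite3.Row` (as in the connection setup from the earlier parts). A quick self-contained demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row   # rows now support mapping access, so dict(row) works
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

rows = conn.execute("SELECT * FROM users").fetchall()
data = [dict(row) for row in rows]
print(data)  # [{'id': 1, 'name': 'Ada'}]
```

Without the `row_factory` line, each row comes back as a plain tuple and `dict(row)` raises an error, so the data-conversion step depends directly on that configuration.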
Data Modification Queries Handling:
else:
    if self.db_type == "sqlite":
        self.connection.commit()
    result['affected_rows'] = cursor.rowcount
    if query.strip().lower().startswith('insert'):
        result['last_insert_id'] = cursor.lastrowid
SQLite commits: SQLite needs explicit commits for changes
PostgreSQL auto-commit: Already configured for automatic commits
Affected rows: How many records were changed
Last insert ID: Auto-generated ID for new records (useful for follow-up operations)
Error Handling Excellence:
except Exception as e:
    logger.error(f"Query execution error: {e}")
    return {
        'success': False,
        'error': str(e),
        'query': query
    }
Comprehensive catch: Handles all possible database errors
Detailed logging: Records the error for debugging
Structured response: Returns consistent error format
Context preservation: Includes the query that failed
Connection Pool Management System
# Global connection pool
connection_pool = {}

async def get_database_manager(db_path: str, db_type: str = "sqlite",
                               connection_string: str = None) -> DatabaseManager:
    """Get or create a database manager for the specified path"""
    # Create a unique key for the connection
    connection_key = f"{db_type}:{db_path}"
    if connection_key in connection_pool:
        manager = connection_pool[connection_key]
        if manager.is_connected():
            return manager
        else:
            # Remove dead connection
            manager.close()
            del connection_pool[connection_key]
    # Create new connection
    manager = DatabaseManager(db_type, db_path, connection_string)
    await manager.initialize()
    # Auto-populate the sample database on first use
    if is_sample_database(db_path) and db_type == "sqlite":
        await create_sample_tables(manager)
    connection_pool[connection_key] = manager
    return manager

def is_sample_database(db_path: str) -> bool:
    """Determine if a path points to the sample database"""
    return os.path.abspath(db_path) == os.path.abspath(SAMPLE_DB_PATH)
Global Connection Pool:
connection_pool = {}: Global dictionary storing all active connections
Shared resource: All functions can access this pool
Memory efficient: Reuses connections instead of creating new ones
Smart Key Generation:
connection_key = f"{db_type}:{db_path}": Creates unique identifiers
Examples:
"sqlite:/path/to/database1.db"
"postgresql:enterprise_db"
"sqlite:/another/project/data.db"
Uniqueness: Each database gets its own connection slot
Connection Reuse Logic:
if connection_key in connection_pool:
    manager = connection_pool[connection_key]
    if manager.is_connected():
        return manager
    else:
        # Remove dead connection
        manager.close()
        del connection_pool[connection_key]
First check: Does a connection already exist?
Health verification: Is the existing connection still working?
Reuse success: Return the working connection (fast path)
Cleanup failure: Remove dead connections from pool
New Connection Creation:
manager = DatabaseManager(db_type, db_path, connection_string)
await manager.initialize()
Fresh instance: Create new DatabaseManager
Async initialization: Set up the connection properly. await waits for something to finish (like connecting to a database or creating a table), but without blocking the entire program.
Error handling: Initialize method handles failures
Sample Database Auto-Setup:
if is_sample_database(db_path) and db_type == "sqlite":
    await create_sample_tables(manager)
Automatic convenience: Sample database gets tables and data automatically
Learning friendly: Instant functionality for beginners
Conditional: Only applies to the specific sample database
Sample Database Detection:
def is_sample_database(db_path: str) -> bool:
    return os.path.abspath(db_path) == os.path.abspath(SAMPLE_DB_PATH)
Absolute path comparison: Handles relative paths correctly
Cross-platform: Works on Windows, Mac, Linux
Exact matching: Prevents accidental auto-setup on wrong databases
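A quick demonstration of why the absolute-path comparison matters (the `SAMPLE_DB_PATH` value here is assumed from the earlier parts of the series):

```python
import os

SAMPLE_DB_PATH = "./data/sample_database.db"   # assumed constant from the earlier parts

def is_sample_database(db_path: str) -> bool:
    # abspath() normalizes "." and ".." segments, so equivalent spellings compare equal
    return os.path.abspath(db_path) == os.path.abspath(SAMPLE_DB_PATH)

print(is_sample_database("data/sample_database.db"))            # True
print(is_sample_database("./data/../data/sample_database.db"))  # True
print(is_sample_database("./data/other.db"))                    # False
```

All three spellings are compared after normalization, so the auto-setup fires for any path that really points at the sample database and for nothing else.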
Connection Pool Benefits:
Performance: Reusing connections is 10-100x faster than creating new ones
Resource efficiency: Lower memory usage and fewer total connections
Reliability: Automatic cleanup of dead connections
Scalability: Can handle many simultaneous database operations
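The reuse-or-rebuild decision can be sketched synchronously with a stand-in manager class (names like `FakeManager` and `get_connection` are illustrative only, not the server's real API):

```python
# Simplified, synchronous sketch of the pool's reuse-or-rebuild logic.
connection_pool = {}

def get_connection(db_type, db_path, factory):
    key = f"{db_type}:{db_path}"          # unique key per database
    mgr = connection_pool.get(key)
    if mgr is not None:
        if mgr.is_connected():
            return mgr                    # fast path: reuse the live connection
        mgr.close()                       # dead connection: clean it up
        del connection_pool[key]
    mgr = factory()                       # slow path: build a fresh manager
    connection_pool[key] = mgr
    return mgr

class FakeManager:
    """Stand-in for DatabaseManager, just for demonstrating pool behavior."""
    def __init__(self):
        self.alive = True
    def is_connected(self):
        return self.alive
    def close(self):
        self.alive = False

first = get_connection("sqlite", "a.db", FakeManager)
second = get_connection("sqlite", "a.db", FakeManager)   # reused: same object
first.alive = False                                      # simulate a dropped connection
third = get_connection("sqlite", "a.db", FakeManager)    # rebuilt: new object
```

The second call returns the exact same object as the first (the fast path), while the third call detects the dead connection and transparently builds a replacement.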
MCP Server Initialization and Tools
# Initialize MCP Server
mcp = FastMCP("Database Query MCP Server")

@mcp.tool()
async def query_database(
    db_path: str,
    query: str,
    params: Optional[List] = None,
    limit: int = 100,
    db_type: str = "sqlite",
    connection_string: Optional[str] = None
) -> str:
    """
    Execute a SELECT query on any database by specifying the path
    Args:
        db_path: Path to the database file (e.g., './data/sample_database.db', '/path/to/your.db')
        query: SQL SELECT query to execute
        params: Optional parameters for the query
        limit: Maximum number of results to return
        db_type: Database type ('sqlite' or 'postgresql')
        connection_string: Connection string for PostgreSQL
    """
The MCP Server Foundation! 🚀
This section creates the actual server and defines the tools that users and AI systems can call.
Server Creation:
mcp = FastMCP("Database Query MCP Server"):
FastMCP: The framework that handles all the MCP protocol details
Server name: Descriptive name for identification
Protocol handling: Manages communication with AI systems
Tool Decorator Pattern:
@mcp.tool():
Decorator magic: Registers the function as an available tool
Automatic discovery: MCP server can list all available tools
Protocol integration: Handles parameter passing and response formatting
Comprehensive Documentation: The docstring serves multiple crucial purposes:
Human documentation: Helps developers understand the tool
AI guidance: Helps AI systems understand how to use the tool
Parameter examples: Shows exactly what format to use
Type information: Clarifies what each parameter expects
Now let's examine the complete query tool implementation:
try:
    # Validate it's a SELECT query
    if not query.strip().lower().startswith('select'):
        return json.dumps({
            'status': 'error',
            'error': 'Only SELECT queries are allowed with this tool'
        }, indent=2)
    db = await get_database_manager(db_path, db_type, connection_string)
    query_params = tuple(params) if params else None
    result = await db.execute_query(query, query_params, limit=limit)
    if result['success']:
        return json.dumps({
            'status': 'success',
            'database_path': db_path,
            'is_sample_database': is_sample_database(db_path),
            'count': result['count'],
            'data': result['data']
        }, indent=2, default=str)
    else:
        return json.dumps({
            'status': 'error',
            'database_path': db_path,
            'error': result['error']
        }, indent=2)
except Exception as e:
    return json.dumps({
        'status': 'error',
        'database_path': db_path,
        'error': str(e)
    }, indent=2)
Query Tool Implementation Deep Dive:
Read-Only Validation:
if not query.strip().lower().startswith('select')::
Safety first: This tool only allows data reading, not modification
Clear error message: Tells user exactly why their query was rejected
Consistent formatting: All responses are properly formatted JSON
Database Manager Acquisition:
db = await get_database_manager(db_path, db_type, connection_string):
Connection pooling: Uses our smart connection management system
Automatic setup: May create new connection or reuse existing one
Type flexibility: Handles both SQLite and PostgreSQL
Parameter Processing:
query_params = tuple(params) if params else None:
Safe conversion: Converts list to tuple format expected by database
Null handling: Gracefully handles case where no parameters provided
Query Execution:
result = await db.execute_query(query, query_params, limit=limit):
Comprehensive execution: Uses our sophisticated query engine
Safety limits: Respects the limit parameter for performance
Security validation: All security checks happen automatically
Success Response Construction:
return json.dumps({
    'status': 'success',
    'database_path': db_path,
    'is_sample_database': is_sample_database(db_path),
    'count': result['count'],
    'data': result['data']
}, indent=2, default=str)
Structured response: Consistent format for all successful operations
Context information: Includes which database was used
Sample database flag: Helps users know if they're working with test data
Result metadata: Count and actual data returned
JSON formatting: indent=2 makes output human-readable
Type handling: default=str handles dates and other non-JSON types
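A tiny demonstration of what `default=str` buys you: without it, `json.dumps` raises a `TypeError` on `datetime` values; with it, they serialize as readable strings:

```python
import json
from datetime import datetime

row = {
    "id": 1,
    "name": "John Doe",
    "created_at": datetime(2025, 6, 10, 8, 5, 51),  # not JSON-serializable by default
}
payload = json.dumps(row, indent=2, default=str)    # default=str rescues the datetime
print(payload)
```

`default` is only consulted for values the encoder can't handle on its own, so ordinary strings and numbers pass through untouched while `created_at` becomes `"2025-06-10 08:05:51"`.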
Results and Output: Watching the Magic in Action
When we fire up this database server, we'll witness something truly spectacular! Let's look at the real results from our comprehensive testing session:
Sample Database Creation Magic ✨
The moment you first connect to the sample database, the server springs into action:
{
  "status": "success",
  "database_path": "./data/sample_database.db",
  "is_sample_database": true,
  "tables": [
    {"name": "users"},
    {"name": "products"},
    {"name": "orders"}
  ]
}
What just happened? Our server automatically detected this was the sample database and created a complete e-commerce schema with realistic test data! No setup required – instant functionality for learning and testing.
Real Query Results in Action
User Data Query:
SELECT * FROM users WHERE age >= 30
Result:
{
  "status": "success",
  "database_path": "./data/sample_database.db",
  "is_sample_database": true,
  "count": 3,
  "data": [
    {
      "id": 1,
      "name": "John Doe",
      "email": "john@example.com",
      "age": 30,
      "created_at": "2025-06-10 08:05:51"
    },
    {
      "id": 3,
      "name": "Bob Johnson",
      "email": "bob@example.com",
      "age": 35,
      "created_at": "2025-06-10 08:05:51"
    },
    {
      "id": 5,
      "name": "Charlie Wilson",
      "email": "charlie@example.com",
      "age": 42,
      "created_at": "2025-06-10 08:05:51"
    }
  ]
}
Multi-Database Juggling Act 🤹
One of the most interesting features we'll see:
Database Connection Status
Total Active Connections: 2
Connection 1: Sample Database
✅ Path: E:\workspace\python\mcp\database-query\data\sample_database.db
✅ Type: SQLite
✅ Status: Connected & Ready
Connection 2: Original Database
✅ Path: E:\workspace\python\mcp\database-query\data\original_database.db
✅ Type: SQLite
✅ Status: Connected & Ready
This is like being a DJ with multiple turntables – seamlessly mixing between different music collections (databases) without missing a beat! 🎧
SQL Injection Attempt (Blocked):
Query Attempted: SELECT * FROM users; DROP TABLE users;
Security Result: ❌ "You can only execute one statement at a time."
Database Status: ✅ Completely safe and intact!
Things to Watch Out For: Pro Tips and Common Gotchas
1. The "Missing Database File" Adventure 🗺️
What happens:
{
  "status": "error",
  "error": "Database file does not exist: ./my_awesome_database.db"
}
Why it happens: You're trying to connect to a database that doesn't exist yet (like asking for directions to a house that hasn't been built!)
The fix:
# Use the sample database for testing
db_path = "./data/sample_database.db"
# Or create a new database first
import sqlite3
conn = sqlite3.connect("./my_new_database.db")
conn.close()
Pro Tip: 💡 The sample database gets created automatically - it's your training wheels while learning the system!
2. The "WHERE Clause Requirement" Protection 🎯
What happens:
{
  "status": "error",
  "error": "DELETE operations require a WHERE clause for safety."
}
Why it's critical: Prevents DELETE FROM users (which deletes ALL users!) by requiring DELETE FROM users WHERE condition.
Safe DELETE examples:
# ✅ Deletes specific user
delete_from_database("my_db.db", "users", "id = 5", confirm_delete=True)
# ✅ Deletes inactive users
delete_from_database("my_db.db", "users", "last_login < '2023-01-01'", confirm_delete=True)
# ✅ Deletes test data
delete_from_database("my_db.db", "orders", "status = 'test'", confirm_delete=True)
Now let's continue exploring the remaining unexplained parts of our amazing database system!
The Data Modification Specialist - insert_into_database
@mcp.tool()
async def insert_into_database(
    db_path: str,
    table: str,
    data: Dict[str, Any],
    db_type: str = "sqlite",
    connection_string: Optional[str] = None
) -> str:
    """
    Insert a record into any database by specifying the path
    Args:
        db_path: Path to the database file
        table: Table name to insert into
        data: Dictionary of column:value pairs to insert
        db_type: Database type ('sqlite' or 'postgresql')
        connection_string: Connection string for PostgreSQL
    """
The Smart INSERT Builder! 🏗️
This tool is like having a personal assistant who knows exactly how to add new records to any table:
try:
    db = await get_database_manager(db_path, db_type, connection_string)
    # Build INSERT query
    columns = ', '.join(data.keys())
    placeholders = ', '.join(['?' for _ in data])
    query = f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"
    params = tuple(data.values())
    result = await db.execute_query(query, params)
The Query Builder Magic:
columns = ', '.join(data.keys()): Creates "name, email, age" from your dictionary keys
placeholders = ', '.join(['?' for _ in data]): Creates "?, ?, ?" for safe parameterization
query = f"INSERT INTO {table} ({columns}) VALUES ({placeholders})": Builds "INSERT INTO users (name, email, age) VALUES (?, ?, ?)"
params = tuple(data.values()): Converts your values to a safe tuple format
It's like having a smart form that automatically formats your information for any database table, no matter what fields it has!
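The same builder logic as a standalone helper you can run directly (it relies on Python 3.7+ dicts preserving insertion order, so columns and values stay aligned):

```python
def build_insert(table: str, data: dict) -> tuple:
    """Build a parameterized INSERT statement from a column:value dictionary."""
    columns = ', '.join(data.keys())                  # "name, email, age"
    placeholders = ', '.join(['?' for _ in data])     # "?, ?, ?"
    query = f"INSERT INTO {table} ({columns}) VALUES ({placeholders})"
    return query, tuple(data.values())

query, params = build_insert("users", {"name": "Ada", "email": "ada@example.com", "age": 36})
print(query)   # INSERT INTO users (name, email, age) VALUES (?, ?, ?)
print(params)  # ('Ada', 'ada@example.com', 36)
```

Because the values travel separately as parameters, the database driver handles quoting and escaping; only the column names and table name ever reach the SQL string.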
The Precision Updater - update_database
@mcp.tool()
async def update_database(
    db_path: str,
    table: str,
    data: Dict[str, Any],
    where_clause: str,
    where_params: Optional[List] = None,
    db_type: str = "sqlite",
    connection_string: Optional[str] = None
) -> str:
    """
    Update records in any database by specifying the path
    """
The UPDATE Query Architect! ⚡
try:
    db = await get_database_manager(db_path, db_type, connection_string)
    # Build UPDATE query
    set_clause = ', '.join([f"{col} = ?" for col in data.keys()])
    query = f"UPDATE {table} SET {set_clause} WHERE {where_clause}"
    # Combine data values and where parameters
    params = list(data.values())
    if where_params:
        params.extend(where_params)
    result = await db.execute_query(query, tuple(params))
The Smart Parameter Merging:
set_clause = ', '.join([f"{col} = ?" for col in data.keys()]): Creates "name = ?, age = ?"
params = list(data.values()): Gets your update values
params.extend(where_params): Adds WHERE clause parameters
Final query: "UPDATE users SET name = ?, age = ? WHERE id = ?" with params ["John", 30, 5]
Pro Tip: 💡 The system cleverly merges your update data with your WHERE parameters in the correct order - no parameter confusion!
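As a runnable standalone sketch, the merge looks like this:

```python
def build_update(table: str, data: dict, where_clause: str, where_params=None) -> tuple:
    """Build a parameterized UPDATE, merging SET values with WHERE parameters."""
    set_clause = ', '.join(f"{col} = ?" for col in data)   # "name = ?, age = ?"
    query = f"UPDATE {table} SET {set_clause} WHERE {where_clause}"
    params = list(data.values())          # SET values come first...
    if where_params:
        params.extend(where_params)       # ...then the WHERE values, in order
    return query, tuple(params)

query, params = build_update("users", {"name": "John", "age": 30}, "id = ?", [5])
print(query)   # UPDATE users SET name = ?, age = ? WHERE id = ?
print(params)  # ('John', 30, 5)
```

The ordering matters: SQLite fills the `?` placeholders left to right, so the SET values must precede the WHERE values in the final tuple, exactly as the builder arranges them.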
The Cautious Destroyer - delete_from_database
@mcp.tool()
async def delete_from_database(
    db_path: str,
    table: str,
    where_clause: str,
    where_params: Optional[List] = None,
    confirm_delete: bool = False,
    db_type: str = "sqlite",
    connection_string: Optional[str] = None
) -> str:
    """
    Delete records from any database by specifying the path
    Args:
        db_path: Path to the database file
        table: Table name to delete from
        where_clause: WHERE clause (without WHERE keyword)
        where_params: Parameters for the WHERE clause
        confirm_delete: Must be True to proceed with deletion
        db_type: Database type ('sqlite' or 'postgresql')
        connection_string: Connection string for PostgreSQL
    """
The Safety-Conscious Function! 🛡️
This function is like having a nuclear launch system - multiple fail-safes, confirmations, and safety checks before anything destructive happens.
Safety Check #1: The Confirmation Gate
if not confirm_delete:
    return json.dumps({
        "status": "error",
        "database_path": db_path,
        "error": "Delete operation requires explicit confirmation. Set confirm_delete=True to proceed."
    }, indent=2)
The Explicit Consent Mechanism:
if not confirm_delete:: Checks if user explicitly set confirm_delete=True
Default is False: You cannot accidentally delete - it requires intentional action
Clear error message: Tells you exactly what you need to do to proceed
Immediate return: Stops execution completely without touching the database
Like a bank requiring your signature AND a witness for large withdrawals!
Safety Check #2: The WHERE Clause Enforcer
if not where_clause or where_clause.strip() == "":
    return json.dumps({
        "status": "error",
        "database_path": db_path,
        "error": "DELETE operations require a WHERE clause for safety."
    }, indent=2)
The "Don't Nuke Everything" Protection:
if not where_clause: Checks if WHERE clause was provided at all
where_clause.strip() == "": Catches empty strings or just whitespace
Prevents DELETE FROM table: This would delete ALL records (disaster!)
Forces specificity: You must specify WHAT to delete
Why this matters: DELETE FROM users deletes EVERY user in your system. This guard prevents that catastrophe!
Safety Check #3: The Database Connection Verification
try:
    db = await get_database_manager(db_path, db_type, connection_string)
The "Is The Database Even There?" Check:
Connection validation: Ensures database exists and is accessible
Exception handling: If database doesn't exist, the try block will catch it
Resource management: Uses our smart connection pooling system
Safety Check #4: The "Preview Mode" - Count Before You Destroy
# Build DELETE query
query = f"DELETE FROM {table} WHERE {where_clause}"
params = tuple(where_params) if where_params else None
# First, count records that would be affected
count_query = f"SELECT COUNT(*) as count FROM {table} WHERE {where_clause}"
count_result = await db.execute_query(count_query, params, fetch_all=False)
if not count_result['success']:
    return json.dumps({
        'status': 'error',
        'database_path': db_path,
        'error': f"Error checking record count: {count_result['error']}"
    }, indent=2)
records_to_delete = count_result['data']['count']
The "Look Before You Leap" System:
count_query = f"SELECT COUNT(*) as count FROM {table} WHERE {where_clause}": Creates a safe counting query using the SAME WHERE clause
params = tuple(where_params) if where_params else None: Safely handles parameters (could be None)
fetch_all=False: Gets only the count result (single value)
Error checking: Verifies the count query worked before proceeding
records_to_delete = count_result['data']['count']: Extracts the exact number of records that would be deleted
Why count first? It's like asking "How many files will be deleted?" before emptying the trash!
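You can reproduce the count-then-delete pattern against an in-memory SQLite database (the `orders` table and its rows here are toy data, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "test"), (2, "paid"), (3, "test")])

table, where_clause = "orders", "status = ?"
params = ("test",)

# Preview: count the rows the DELETE would remove, using the SAME WHERE clause
count_query = f"SELECT COUNT(*) as count FROM {table} WHERE {where_clause}"
records_to_delete = conn.execute(count_query, params).fetchone()[0]
print(records_to_delete)  # 2

# Only now run the actual DELETE
conn.execute(f"DELETE FROM {table} WHERE {where_clause}", params)
remaining = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
print(remaining)          # 1
```

Because the count query and the DELETE share the same WHERE clause and the same parameters, the preview number is an exact prediction of what the DELETE will do.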
Safety Check #5: The "Nothing to Delete" Early Exit
if records_to_delete == 0:
    return json.dumps({
        'status': 'warning',
        'database_path': db_path,
        'is_sample_database': is_sample_database(db_path),
        'message': 'No records match the WHERE clause criteria',
        'affected_rows': 0
    }, indent=2)
The "Oops, No Matches" Handler:
if records_to_delete == 0:: Checks if the WHERE clause matches anything
Status 'warning': Not an error, but worth noting
Informative response: Tells you why nothing was deleted
'affected_rows': 0: Makes it clear no data was changed
Why this matters: Prevents confusion when your WHERE clause doesn't match any records.
The Actual Deletion - Finally!
# Execute the DELETE
result = await db.execute_query(query, params, allow_delete=True)
if result['success']:
    return json.dumps({
        'status': 'success',
        'database_path': db_path,
        'is_sample_database': is_sample_database(db_path),
        'message': f'Records deleted from {table}',
        'affected_rows': result['affected_rows'],
        'records_deleted': records_to_delete
    }, indent=2)
else:
    return json.dumps({
        'status': 'error',
        'database_path': db_path,
        'error': result['error']
    }, indent=2)
The Moment of Truth:
allow_delete=True: Special flag telling the query engine this DELETE is intentional
query: The actual DELETE statement built earlier
params: Safe parameterized values
Success response includes:
'affected_rows': How many records the database actually deleted
'records_deleted': Our pre-counted number (should match!)
Clear success message: Confirms what happened
if result['success']:
    return json.dumps({
        'status': 'success',
        'database_path': db_path,
        'is_sample_database': is_sample_database(db_path),
        'message': f'Records deleted from {table}',
        'affected_rows': result['affected_rows'],
        'records_deleted': records_to_delete
    }, indent=2)
This block runs if the SQL query executed successfully.
It returns a JSON-formatted string with:
"status": "success" → indicates success
"database_path" → the path to the database file
"is_sample_database" → checks if this is the sample database using the is_sample_database() function
"message" → a human-readable confirmation of which table the records were deleted from
"affected_rows" and "records_deleted" → the database-reported count and our pre-counted number (they should match)
indent=2 makes the JSON readable.
else:
return json.dumps({
'status': 'error',
'database_path': db_path,
'error': result['error']
}, indent=2)

This block runs if the SQL query failed (i.e., result['success'] is False).
It returns a JSON string with:
"status": "error" → indicates failure
"database_path" → same database path
"error" → the actual error message from result['error']
Helps identify what went wrong during query execution.
The Final Safety Net - Exception Handling
except Exception as e:
return json.dumps({
'status': 'error',
'database_path': db_path,
'error': str(e)
}, indent=2)

The "Something Went Wrong" Handler:
Catches ANY exception: Database errors, connection issues, etc.
Graceful failure: Returns structured JSON instead of crashing
Error context: Includes database path for debugging
String conversion: str(e) ensures error message is JSON-serializable
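The str(e) conversion matters because exception objects themselves are not JSON-serializable; a quick sketch:

```python
import json

# Passing the exception object directly fails
try:
    json.dumps({"error": ValueError("bad input")})
except TypeError:
    print("raw exception objects cannot be serialized")

# Converting with str() yields a plain, serializable message
payload = json.dumps({"status": "error", "error": str(ValueError("bad input"))})
print(payload)
```

Without the str() call, the error handler itself would raise a TypeError, turning a graceful failure into a crash.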
The Six-Layer Safety System Summary:
🔐 Confirmation Gate: Must explicitly set confirm_delete=True
📝 WHERE Clause Enforcer: Cannot delete without specificity
💾 Database Connection: Verifies database exists and is accessible
🔍 Preview Mode: Counts records before deletion
⚠️ Zero-Match Handler: Gracefully handles "nothing to delete" scenarios
🛡️ Exception Safety: Catches and handles all possible errors
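A condensed sketch of how the first two layers (confirmation gate and WHERE enforcer) translate into guard clauses; the function name and simplified return shape are illustrative, not the walkthrough's exact code:

```python
import json

def safe_delete(table: str, where_clause: str, confirm_delete: bool = False) -> str:
    """Minimal guard-clause skeleton for the layered DELETE checks."""
    # Layer 1: confirmation gate -- refuse unless explicitly confirmed
    if not confirm_delete:
        return json.dumps({"status": "error",
                           "error": "Set confirm_delete=True to proceed"})
    # Layer 2: WHERE clause enforcer -- no mass deletions
    if not where_clause.strip():
        return json.dumps({"status": "error",
                           "error": "A WHERE clause is required"})
    # Layers 3-6 (connection, preview, zero-match, exceptions) would follow here
    return json.dumps({"status": "success", "table": table})

print(safe_delete("users", "id = 12345"))        # blocked: no confirmation
print(safe_delete("users", "id = 12345", True))  # passes both gates
```

Stacking the checks as early returns keeps the happy path at the bottom and makes each safety layer independently testable.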
Why This Level of Paranoia?
Database disasters are permanent and devastating:
Lost customer data = Lost business + Legal liability
Deleted financial records = Audit failures + Compliance violations
Removed user accounts = Customer service nightmares
Wiped product catalogs = Website downtime + Revenue loss
Professional DELETE Best Practices:
| ✅ Best Practice | Example | Notes |
| --- | --- | --- |
| Test your WHERE clause first | query_database("shop.db", "SELECT * FROM users WHERE age < 18") | Preview which records will be affected |
| Use specific conditions | delete_from_database("shop.db", "users", "id = 12345", confirm_delete=True) | Target a single user or row |
| Use date ranges for data cleanup | delete_from_database("logs.db", "access_log", "created_at < '2023-01-01'", confirm_delete=True) | Useful for archiving or cleaning old data |
| Always include confirm_delete=True | delete_from_database("test.db", "temp_data", "session_id = ?", ["abc123"], confirm_delete=True) | Prevents accidental deletions |
The delete_from_database function represents the gold standard for safe data deletion - multiple validation layers, preview capabilities, and foolproof confirmation systems that protect your valuable data from accidents! 🏆
The Database Explorer - get_database_tables
@mcp.tool()
async def get_database_tables(db_path: str, db_type: str = "sqlite",
connection_string: Optional[str] = None) -> str:
"""
Get list of tables in any database by specifying the path
Args:
db_path: Path to the database file
db_type: Database type ('sqlite' or 'postgresql')
connection_string: Connection string for PostgreSQL
"""
try:
db = await get_database_manager(db_path, db_type, connection_string)
if db_type == "sqlite":
query = "SELECT name FROM sqlite_master WHERE type='table'"
else: # PostgreSQL
query = """
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public'
"""
result = await db.execute_query(query, allow_schema_queries=True)
if result['success']:
return json.dumps({
'status': 'success',
'database_path': db_path,
'is_sample_database': is_sample_database(db_path),
'tables': result['data']
}, indent=2, default=str)
else:
return json.dumps({
'status': 'error',
'database_path': db_path,
'error': result['error']
}, indent=2)
except Exception as e:
return json.dumps({
'status': 'error',
'database_path': db_path,
'error': str(e)
}, indent=2)

The Database X-Ray Vision! 👀
try:
db = await get_database_manager(db_path, db_type, connection_string)
if db_type == "sqlite":
query = "SELECT name FROM sqlite_master WHERE type='table'"
else: # PostgreSQL
query = """
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public'
"""
result = await db.execute_query(query, allow_schema_queries=True)

Database-Specific Magic:
SQLite: Uses sqlite_master (SQLite's internal catalog table)
PostgreSQL: Uses information_schema.tables (SQL standard metadata)
allow_schema_queries=True: Special flag that bypasses normal security restrictions for metadata queries
Why different queries? It's like asking for directions in different languages - same goal, different syntax for each database type!
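The SQLite branch can be tried directly with the standard-library sqlite3 module; the two table names here are just examples:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")

# The same catalog query the tool issues for SQLite
rows = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
tables = [name for (name,) in rows]
print(sorted(tables))  # → ['orders', 'users']
```

sqlite_master is an ordinary queryable table, which is why plain SELECT works; PostgreSQL exposes the equivalent information through the information_schema views instead.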
The Table Surgeon - get_table_schema
@mcp.tool()
async def get_table_schema(db_path: str, table_name: str, db_type: str = "sqlite",
connection_string: Optional[str] = None) -> str:
"""
Get schema/structure of a table in any database by specifying the path
Args:
db_path: Path to the database file
table_name: Name of the table to describe
db_type: Database type ('sqlite' or 'postgresql')
connection_string: Connection string for PostgreSQL
"""
try:
db = await get_database_manager(db_path, db_type, connection_string)
if db_type == "sqlite":
query = f"PRAGMA table_info({table_name})"
else: # PostgreSQL
query = """
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = ?
ORDER BY ordinal_position
"""
params = (table_name,) if db_type == "postgresql" else None
result = await db.execute_query(query, params, allow_schema_queries=True)
if result['success']:
return json.dumps({
'status': 'success',
'database_path': db_path,
'is_sample_database': is_sample_database(db_path),
'table': table_name,
'schema': result['data']
}, indent=2, default=str)
else:
return json.dumps({
'status': 'error',
'database_path': db_path,
'error': result['error']
}, indent=2)
except Exception as e:
return json.dumps({
'status': 'error',
'database_path': db_path,
'error': str(e)
}, indent=2)

The Table Anatomy Expert! 🔬
It reveals the complete internal structure, constraints, and metadata.
Step 1: Database Connection Setup
try:
db = await get_database_manager(db_path, db_type, connection_string)

The Foundation Layer:
Connection reuse: Uses our smart connection pooling system
Multi-database support: Works with any database path
Error handling: The try block catches connection failures
Step 2: Database-Specific Query Construction
if db_type == "sqlite":
query = f"PRAGMA table_info({table_name})"
else: # PostgreSQL
query = """
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = ?
ORDER BY ordinal_position
"""SQLite PRAGMA Deep Dive:
PRAGMA table_info(table_name): SQLite's special administrative command
Why f-string? PRAGMA commands don't support parameterized queries
What it returns: Column ID, name, type, not null, default value, primary key flag
Direct table reference: Table name is embedded directly in the command
PostgreSQL Standard SQL:
information_schema.columns: SQL standard system catalog
Parameterized safety: Uses ? placeholder for table name
Standard columns: column_name, data_type, is_nullable, column_default
ORDER BY ordinal_position: Returns columns in their defined order
Cross-database compatibility: Works on any SQL-compliant database
Step 3: Parameter Handling Logic
params = (table_name,) if db_type == "postgresql" else NoneThe Smart Parameter Assignment:
PostgreSQL: (table_name,) creates a single-element tuple for the ? placeholder
SQLite: None because PRAGMA doesn't support parameters
Tuple syntax: (table_name,) with comma is crucial - (table_name) would just be a string!
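The trailing comma is easy to verify in a REPL:

```python
table_name = "users"

with_comma = (table_name,)    # a 1-tuple containing one string
without_comma = (table_name)  # just parentheses around a string

print(type(with_comma).__name__)     # → tuple
print(type(without_comma).__name__)  # → str
```

Passing a bare string where a parameter sequence is expected is a classic bug: many drivers would iterate it character by character instead of binding it as one value.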
Step 4: Safe Schema Query Execution
result = await db.execute_query(query, params, allow_schema_queries=True)

The Special Permission Flag:
allow_schema_queries=True: Bypasses normal security restrictions
Why needed? Schema queries often use keywords like "SELECT" with system tables
Security exception: These queries are read-only and safe for metadata access
Real-World Schema Output Example:
SQLite Result:
{
"status": "success",
"table": "users",
"schema": [
{
"cid": 0,
"name": "id",
"type": "INTEGER",
"notnull": 0,
"dflt_value": null,
"pk": 1
},
{
"cid": 1,
"name": "name",
"type": "VARCHAR(100)",
"notnull": 1,
"dflt_value": null,
"pk": 0
},
{
"cid": 2,
"name": "email",
"type": "VARCHAR(100)",
"notnull": 1,
"dflt_value": null,
"pk": 0
}
]
}

PostgreSQL Result:
{
"status": "success",
"table": "users",
"schema": [
{
"column_name": "id",
"data_type": "integer",
"is_nullable": "NO",
"column_default": "nextval('users_id_seq'::regclass)"
},
{
"column_name": "name",
"data_type": "character varying",
"is_nullable": "NO",
"column_default": null
},
{
"column_name": "email",
"data_type": "character varying",
"is_nullable": "NO",
"column_default": null
}
]
}

Pro Tip: 💡 PRAGMA commands are SQLite's special administrative language - they're like having a direct conversation with the database engine!
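You can reproduce the SQLite result shown above with a few lines of sqlite3; the users schema here matches the example output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name VARCHAR(100) NOT NULL,
        email VARCHAR(100) NOT NULL
    )
""")

# PRAGMA table_info yields (cid, name, type, notnull, dflt_value, pk) per column
for cid, name, col_type, notnull, dflt_value, pk in conn.execute(
        "PRAGMA table_info(users)"):
    print(cid, name, col_type, notnull, pk)
```

Note that id shows notnull = 0 even though it is the primary key, exactly as in the JSON example: the pk flag, not notnull, carries that information.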
Part 4 is available at the following link: https://www.codersarts.com/post/building-a-database-mcp-server-the-ultimate-code-walkthrough-part-4
Transform Your Projects with Codersarts
Enterprise-grade database servers with advanced security and performance optimization
Multi-database integration systems supporting SQLite, PostgreSQL, MySQL, and cloud databases
AI-powered query optimization and intelligent database management tools
Production-ready deployment with monitoring, logging, and automated backup systems
Don't let complex implementations slow down your innovation. Connect with Codersarts today and turn your ideas into reality!
Ready to get started? Visit Codersarts.com or reach out to our team to discuss your next project. The future of intelligent automation is here – let's build it together!