Codersarts Blog
What’s new and exciting at Codersarts


Designing and Implementing a Multi-Agent Collaboration Framework
ASSIGNMENT REQUIREMENT DOCUMENT Course: Agent-to-Agent (A2A) — Multi-Agent Systems in Python Student Level: Undergraduate Year 3 / Postgraduate Submission Platform: Moodle (Learning Management System) Individual / Group: Individual Assignments Total Assignments: 2 This document contains the full specifications for Assignment 1. Read every section carefully before you begin. You will be assessed on the quality of your implementation, the depth of your analysis, and the…
ganesh90
4 days ago · 10 min read


Designing an Adaptive Chunking Engine for Real-World RAG Systems
Purpose In this assignment, you move beyond isolated chunking techniques to design a complete, adaptive chunking system that intelligently detects document types and selects or combines chunking strategies accordingly. This simulates how chunking is actually deployed in production RAG systems — not as a fixed function, but as a design decision that adapts to input characteristics. Connection to Course Learning Outcomes (CLOs) CLO Description Relevance CLO 1 Identify…
ganesh90
5 days ago · 10 min read
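The adaptive engine this assignment describes reduces to a detect-then-dispatch pattern: classify the incoming document, then route it to a matching chunker. A minimal sketch — the toy detector, the function names, and the strategy table are all illustrative assumptions, not the assignment's spec:

```python
# Hypothetical sketch of detect-then-dispatch chunking.
# All names and heuristics are illustrative, not from the assignment.

def chunk_fixed(text, size=200):
    """Fixed-size chunking: split every `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def chunk_by_sentence(text):
    """Naive sentence-based chunking on '. ' boundaries."""
    return [s.strip() for s in text.split(". ") if s.strip()]

def detect_doc_type(text):
    """Toy detector: Markdown-ish if it contains a '#' heading, else prose."""
    return "markdown" if "#" in text else "prose"

STRATEGIES = {
    "markdown": chunk_fixed,
    "prose": chunk_by_sentence,
}

def adaptive_chunk(text):
    """Select a chunker based on the detected document type."""
    return STRATEGIES[detect_doc_type(text)](text)
```

The payoff of this shape is that smarter detectors (file extension, structural heuristics) can be swapped in without touching any of the individual strategies.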


Building a Metadata-Aware Ingestion & Retrieval Pipeline
Course: Metadata Filtering Level: Medium → Advanced Type: Individual Assignment Duration: 5–7 days Objective The objective of this assignment is to help you: Understand why metadata filtering is essential for production RAG systems Design a metadata schema for a real-world knowledge base Implement metadata-preserving chunking so that chunk-level metadata is never lost Build and apply pre-filters using ChromaDB's filter syntax Compare pre-filtering vs post-filtering…
ganesh90
5 days ago · 10 min read
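The pre-filtering idea from this assignment can be illustrated without ChromaDB itself: filter the candidate pool by metadata first, then rank only the survivors. A toy in-memory sketch — the chunk layout, field names, and the word-overlap scoring are assumptions; ChromaDB expresses the same idea via a `where` dict passed to a query:

```python
# In-memory stand-in (NOT ChromaDB) contrasting a metadata pre-filter
# with similarity ranking. Chunk structure and fields are assumptions.

CHUNKS = [
    {"text": "Q3 revenue grew 12%", "meta": {"year": 2024, "dept": "finance"}},
    {"text": "Hiring freeze announced", "meta": {"year": 2023, "dept": "hr"}},
    {"text": "Q3 hiring plan", "meta": {"year": 2024, "dept": "hr"}},
]

def matches(meta, where):
    """Exact-match filter, loosely modeled on a ChromaDB-style `where` dict."""
    return all(meta.get(k) == v for k, v in where.items())

def pre_filter_search(query, where, k=2):
    """Filter by metadata FIRST, then rank only the surviving chunks."""
    pool = [c for c in CHUNKS if matches(c["meta"], where)]
    # Toy relevance score: lowercase words shared with the query.
    score = lambda c: len(set(query.lower().split()) & set(c["text"].lower().split()))
    return sorted(pool, key=score, reverse=True)[:k]
```

Post-filtering would invert the order — rank everything, then discard non-matching results — which can silently return fewer than `k` hits; that trade-off is presumably what the truncated comparison above explores.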


Building a Conversational AI Agent with Memory
Course: LLM Foundational Course Level: Medium → Advanced Type: Individual Assignment Duration: 5–7 days Total Marks: 100 Objective The objective of this assignment is to help you: Implement conversation memory that manages context windows Control LLM output using temperature, max_tokens, and stop sequences Build a complete agent that combines conversation history with semantic search Handle multi-turn conversations with proper context management Track usage and costs…
ganesh90
5 days ago · 7 min read
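The context-window management this assignment asks for can be sketched as a sliding window over past turns: estimate each turn's token cost and evict the oldest turns once a budget is exceeded. The class name and the 4-characters-per-token heuristic below are assumptions — a real implementation would use the model's own tokenizer:

```python
# Minimal sketch of windowed conversation memory. The token heuristic
# and the class interface are assumptions, not the course's API.

class ConversationMemory:
    def __init__(self, max_tokens=100):
        self.max_tokens = max_tokens
        self.turns = []  # list of {"role": ..., "content": ...}

    def _estimate_tokens(self, text):
        # Rough heuristic: ~4 characters per token for English text.
        return max(1, len(text) // 4)

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})
        # Drop the oldest turns until the history fits the budget.
        while sum(self._estimate_tokens(t["content"]) for t in self.turns) > self.max_tokens:
            self.turns.pop(0)

    def context(self):
        """History formatted for inclusion in the next prompt."""
        return "\n".join(f'{t["role"]}: {t["content"]}' for t in self.turns)
```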


Token Economics and Semantic Search with Embeddings
Course: LLM Foundational Course Level: Medium Type: Individual Assignment Duration: 5–7 days Total Marks: 100 Objective The objective of this assignment is to help you: Understand tokenization and how it affects API costs Implement token counting and cost calculation functions Build a vector database from scratch using embeddings Perform semantic search to retrieve relevant documents Create a simple RAG system that answers questions using retrieved context Think…
ganesh90
5 days ago · 6 min read
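Token-based cost accounting, one of the bullets above, is a small amount of arithmetic once tokens can be counted. The sketch below uses a whitespace split as a stand-in tokenizer and made-up per-1K prices; real code would use the provider's tokenizer (e.g. tiktoken) and published rates:

```python
# Hedged sketch of token-cost accounting. Prices are placeholders,
# not any provider's real rates.

PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}  # USD, illustrative only

def count_tokens(text):
    """Stand-in tokenizer: whitespace-separated words as 'tokens'."""
    return len(text.split())

def estimate_cost(prompt, completion):
    """Cost = (tokens / 1000) * per-1K price, summed for both directions."""
    cost = (count_tokens(prompt) / 1000) * PRICE_PER_1K["input"]
    cost += (count_tokens(completion) / 1000) * PRICE_PER_1K["output"]
    return round(cost, 8)
```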


Agentic MCP Systems - Design & Security Analysis
Course: MCP Fundamentals Level: Medium → Advanced Type: Individual Assignment Duration: 5–7 days Objective The objective of this assignment is to help you: Understand advanced agentic MCP capabilities (Sampling, Elicitation, Roots) Design multi-agent systems with appropriate orchestration patterns Analyze security implications of agentic workflows Implement human-in-the-loop design patterns Reason about long-running workflows and error handling Think critically about…
ganesh90
6 days ago · 8 min read


MCP Server Design & Primitives Selection Challenge
Course: MCP Fundamentals Level: Medium Type: Individual Assignment Duration: 4–5 days Objective The objective of this assignment is to help you: Understand the architectural problem MCP solves and why earlier approaches failed Master the distinction between Tools, Resources, and Prompts Apply primitive selection logic to real-world integration scenarios Design MCP Server architectures with appropriate primitives Analyze trade-offs in transport mechanisms and deployment…
ganesh90
6 days ago · 4 min read


Designing an Adaptive Chunking Engine for Real-World RAG Systems
Objective In this assignment, you will move beyond isolated chunking techniques and design a complete, adaptive chunking system that intelligently selects or combines strategies based on the input document type. This is closer to how chunking is actually used in production systems. Problem Statement Most tutorials treat chunking strategies independently: Fixed-size chunking Overlapping chunking Sentence-based chunking Token-aware chunking Semantic chunking However, in real-world…
ganesh90
6 days ago · 4 min read


Designing a Production-Ready Chunking Pipeline for RAG
Course: Chunking Strategies for Production RAG Systems Level: Medium → Advanced Type: Individual Assignment Duration: 5–7 days Objective The objective of this assignment is to help you: Understand and implement multiple chunking strategies Analyze trade-offs between different approaches Design a hybrid chunking pipeline Evaluate chunking quality in a Retrieval-Augmented Generation (RAG) context Think like an engineer building production-ready systems Problem Statement You…
ganesh90
6 days ago · 4 min read
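One building block such a pipeline needs is fixed-size chunking with overlap, so that context straddling a chunk boundary is not lost. A character-based sketch — production pipelines would typically measure size in tokens instead, and the function name is illustrative:

```python
# Sketch of fixed-size chunking with overlap. Sizes are in characters
# for simplicity; a token-aware variant would count tokens instead.

def chunk_with_overlap(text, size=100, overlap=20):
    """Slide a window of `size` chars forward by `size - overlap` each step."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(1, len(text) - overlap), step)]
```

With `size=4, overlap=2`, each chunk repeats the last two characters of its predecessor, which is exactly the redundancy that keeps boundary-spanning sentences retrievable.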


Evaluating Generation Quality and Building an LLM Judge
Course: RAG Evaluation Level: Medium to Advanced Type: Individual Duration: 7 to 10 days Objective This assignment tests your ability to evaluate the generation stage of a RAG pipeline, attribute failures to the correct pipeline stage, and automate the entire evaluation workflow using an LLM as a judge. You will generate RAG answers, measure faithfulness and completeness, run end-to-end error attribution, build a structured LLM judge, and compare automated scores against your…
ganesh90
6 days ago · 7 min read


Building a Golden Dataset and Evaluating Retrieval Quality
Course: RAG Evaluation Level: Beginner to Medium Type: Individual Duration: 5 to 7 days Objective This assignment tests your ability to build the two foundational components of any RAG evaluation workflow: a golden dataset and a retrieval quality report. Without a golden dataset, no evaluation metric has meaning. Without retrieval evaluation, you cannot tell whether failures come from the retrieval stage or the generation stage. By completing this assignment, you will have…
ganesh90
6 days ago · 6 min read
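Given a golden dataset, the standard retrieval metric is recall@k: of the chunks labeled relevant for a question, how many appear in the top-k retrieved results. A sketch — the lists-of-ids record shape is an assumption about what a golden dataset row might contain:

```python
# recall@k against a golden dataset. Record shape (retrieved ids,
# gold-relevant ids) is an assumption for illustration.

def recall_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of gold-relevant chunks that appear in the top-k results."""
    if not relevant_ids:
        return 0.0
    hits = len(set(retrieved_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)
```

A low recall@k points the blame at the retrieval stage; only once retrieval is healthy does it make sense to evaluate generation, which is the split the two RAG Evaluation assignments above follow.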


Multi-Container AI System with Docker Compose and Best Practices
Course: Docker for AI Apps Level: Medium to Advanced Type: Individual Duration: 7 to 10 days Objective This assignment tests your ability to design and operate a multi-container Docker system for an AI application. You will configure container-to-container networking using a user-defined bridge network, orchestrate a multi-service stack with Docker Compose, build and containerize a FastAPI AI REST API with session management and health checks, apply Docker best practices…
ganesh90
6 days ago · 7 min read


Dockerizing a Conversational AI App with Persistent Storage
Course: Docker for AI Apps Level: Beginner to Medium Type: Individual Duration: 5 to 7 days Objective This assignment tests your ability to work with Docker's core building blocks: running and inspecting containers, writing a production-ready Dockerfile, containerizing a Python AI application, and persisting data across container restarts using named volumes. By completing this assignment, you will have built and deployed a fully containerized multi-turn AI chatbot that retains…
ganesh90
6 days ago · 6 min read


Building a Complete RAG Search and Answer System
Course: RAG from Scratch Level: Medium to Advanced Type: Individual Duration: 7 to 10 days Objective This assignment tests your ability to build the retrieval and generation stages of a RAG pipeline from scratch. You will implement cosine similarity without external vector search libraries, build a similarity search function, design a grounding-focused prompt template, and assemble a complete end-to-end RAG system that retrieves context and generates accurate, grounded answers…
ganesh90
6 days ago · 5 min read
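The cosine-similarity-from-scratch requirement in this assignment comes down to a dot product divided by the two vector norms, plus a ranking loop. A pure-Python sketch — real pipelines would vectorize this with NumPy, and the `top_k` helper is illustrative:

```python
# Cosine similarity without vector-search libraries, as the assignment
# describes, plus a toy top-k similarity search over it.

import math

def cosine_similarity(a, b):
    """dot(a, b) / (||a|| * ||b||); returns 0.0 if either vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=2):
    """Rank document vectors by cosine similarity to the query vector."""
    scored = [(cosine_similarity(query_vec, v), i) for i, v in enumerate(doc_vecs)]
    return sorted(scored, reverse=True)[:k]
```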


Building a RAG Knowledge Base Pipeline
Course: RAG from Scratch Level: Beginner to Medium Type: Individual Duration: 5 to 7 days Objective This assignment tests your ability to build the foundational stages of a RAG pipeline: loading documents, extracting clean text, attaching metadata, enriching documents with LLM-generated keywords, and splitting them into retrievable chunks. By completing this assignment, you will have built a reusable knowledge base preparation pipeline that you can apply to any document collection…
ganesh90
6 days ago · 5 min read


Satellite Data Analysis using RAG: AI-Driven Insights for Remote Sensing and Mapping
Introduction Modern satellite constellations generate petabytes of multispectral, hyperspectral, SAR, and LiDAR data every day, far outpacing the capacity of traditional analysis methods. Remote sensing professionals must interpret this imagery against historical baselines, evolving scientific literature, environmental benchmarks, and mission-specific requirements simultaneously. Satellite Data Analysis Systems powered by Retrieval-Augmented Generation (RAG) address this by…
ganesh90
Feb 27 · 17 min read


Loan Underwriting using RAG: Smarter Credit Risk Evaluation with AI Document Intelligence
Introduction Loan underwriting requires the rapid processing of vast volumes of financial documents, regulatory guidelines, and market data under tight deadlines, a challenge that rigid scoring models and manual review workflows are ill-equipped to handle. Underwriters must assess creditworthiness, collateral quality, and compliance requirements while keeping pace with constantly shifting lending regulations and economic conditions. Loan Underwriting Systems powered by Retrieval-Augmented Generation…
ganesh90
Feb 27 · 16 min read


Animal Diagnostic Support using RAG: Bringing Intelligent Clinical Assistance to Veterinary Care
Introduction Veterinary professionals must deliver accurate diagnoses across many species with unique biological differences, while keeping up with constantly evolving research and treatment guidelines. Diagnostic systems powered by Retrieval-Augmented Generation (RAG) provide real-time access to veterinary literature, species-specific protocols, diagnostic data, and patient history. By retrieving and synthesizing the most relevant and up-to-date evidence, these systems deliver…
ganesh90
Feb 27 · 16 min read


Meet Your Always-On Legal Partner: Building a Real-Time Compliance Portal Agent
The High-Stakes Gamble: Why "Good Enough" Compliance is No Longer Enough In the modern global economy, data and digital operations are the engines of growth. But for the legal and risk teams tasked with managing them, these assets are like enriched uranium: immensely powerful when harnessed correctly, but catastrophic if mishandled. We have moved past the era where compliance was a back-office formality; today, it is the frontline of corporate survival. The Problem: A…

Pratibha
Jan 8 · 10 min read


Introduction to Prompt Engineering with Llama 3: Master instruction-tuned conversations and prompting techniques
Introduction Traditional AI interactions require rigid command structures that limit natural communication. Developers struggle to extract optimal responses from language models without specialized knowledge. Manual experimentation with different prompting approaches consumes significant development time. Inconsistent model outputs complicate production deployment and user experience. Llama 3:8B Chat transforms AI interactions through instruction-tuned conversational capabilities…
ganesh90
Dec 23, 2025 · 27 min read