Building Multilingual NLP Systems with RAG

NLP RAG Multilingual FastAPI FAISS

Introduction

Building NLP systems that work across India's linguistic diversity is challenging. With 22 official languages and hundreds of dialects, creating a single question-answering system that serves all users requires careful architecture and multilingual expertise.

In this post, I'll walk through IndicRAG, a production-ready Retrieval-Augmented Generation (RAG) pipeline I built to handle document Q&A across 12+ Indian languages, including Hindi, Bengali, Tamil, Telugu, and more.

The Challenge: Multilingual Document Understanding

Traditional NLP systems face several obstacles in Indian language contexts:

  1. Many documents exist only as scanned images and require OCR
  2. Queries freely mix English with regional languages (code-mixing)
  3. Each script needs its own preprocessing and tokenization
  4. Domain terms in law and medicine often lack good translations

RAG Architecture Overview

Retrieval-Augmented Generation combines the best of information retrieval and generative models:

  1. Document Ingestion: Process and chunk documents with context preservation
  2. Vector Encoding: Convert chunks to embeddings using multilingual models
  3. Semantic Search: Retrieve relevant chunks using FAISS vector similarity
  4. Reranking: Cross-encoder models improve retrieval precision
  5. Generation: Contextualized answer generation with mT5/mBERT

Component 1: Multilingual Document Processing

OCR Integration for Scanned PDFs

Many Indian language documents exist only as scanned images. I integrated dual OCR engines behind a layout-aware extraction step; the Tesseract path looks like this:

import numpy as np
import pytesseract
import layoutparser as lp
from PIL import Image

# Layout model to detect text regions on scanned pages
layout_model = lp.Detectron2LayoutModel(
    'lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config'
)

# Multi-stage OCR pipeline: detect layout blocks, then OCR each block
def extract_text_multilingual(image_path, lang='hin+eng'):
    image = np.array(Image.open(image_path).convert('RGB'))
    layout = layout_model.detect(image)
    text_blocks = []
    for block in layout:
        segment = block.crop_image(image)  # crop the detected region
        text = pytesseract.image_to_string(
            segment,
            lang=lang,
            config='--psm 6'  # assume a single uniform block of text
        )
        text_blocks.append(text)
    return '\n'.join(text_blocks)

Semantic Chunking

Instead of naive fixed-size chunking, I implemented semantic boundary detection:
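
The chunker itself isn't shown here, but the idea can be sketched as follows, assuming sentence-transformers with LaBSE and an illustrative similarity threshold (the actual implementation may differ):

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer('sentence-transformers/LaBSE')

def semantic_chunk(sentences, similarity_threshold=0.6, max_sentences=8):
    # Start a new chunk when the similarity between consecutive sentences
    # drops below the threshold, or when the chunk gets too long
    if not sentences:
        return []
    embeddings = encoder.encode(sentences, convert_to_numpy=True,
                                normalize_embeddings=True)
    chunks, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        similarity = float(np.dot(embeddings[i - 1], embeddings[i]))
        if similarity < similarity_threshold or len(current) >= max_sentences:
            chunks.append(' '.join(current))
            current = []
        current.append(sentences[i])
    chunks.append(' '.join(current))
    return chunks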

Component 2: Multilingual Dense Embeddings

The key to cross-lingual retrieval is using models trained on parallel multilingual data:

Model Selection: LaBSE vs mBERT

I evaluated multiple multilingual embedding models, primarily comparing LaBSE against mBERT-based encoders.

Winner: LaBSE, for its superior cross-lingual retrieval performance despite a slightly higher inference cost.
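
To illustrate why this matters for cross-lingual retrieval, here is a small sketch (not from the IndicRAG code) showing that a Hindi sentence and its English translation land close together in LaBSE's embedding space:

import numpy as np
from sentence_transformers import SentenceTransformer

# LaBSE maps sentences from 100+ languages into a shared 768-dimensional space
encoder = SentenceTransformer('sentence-transformers/LaBSE')

# A Hindi/English parallel pair: "Property tax is levied by the municipal corporation."
sentences = [
    "संपत्ति कर नगर निगम द्वारा लगाया जाता है।",
    "Property tax is levied by the municipal corporation.",
]
embeddings = encoder.encode(sentences, convert_to_numpy=True,
                            normalize_embeddings=True)
print(embeddings.shape)                             # (2, 768)
print(float(np.dot(embeddings[0], embeddings[1])))  # high cross-lingual similarity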

Vector Storage with FAISS

For efficient similarity search over millions of document chunks, I used FAISS (Facebook AI Similarity Search):

import faiss
import numpy as np

# Create FAISS index with inner product similarity
dimension = 768  # LaBSE embedding size
index = faiss.IndexFlatIP(dimension)

# Normalize embeddings (float32 array of shape [num_chunks, 768])
# so that inner product is equivalent to cosine similarity
faiss.normalize_L2(embeddings)
index.add(embeddings)

# Query: normalize the (1, 768) query vector, then fetch the top 50 chunks
faiss.normalize_L2(query_embedding)
scores, chunk_ids = index.search(query_embedding, 50)

# Move the index to GPU for faster search (optional)
gpu_index = faiss.index_cpu_to_gpu(
    faiss.StandardGpuResources(), 0, index
)

Component 3: Cross-Encoder Reranking

Initial retrieval using dense embeddings can miss nuanced matches. A two-stage approach improves precision:

  1. Stage 1 (Fast): FAISS retrieves top 50 candidates (~10ms)
  2. Stage 2 (Accurate): Cross-encoder reranks to top 5 (~100ms)

Why Cross-Encoders?

Unlike bi-encoders (which encode query and document separately), cross-encoders process them jointly, capturing fine-grained interaction signals:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cross-encoder/mmarco-mMiniLMv2-L12-H384-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
reranker = AutoModelForSequenceClassification.from_pretrained(model_name)
reranker.eval()

def rerank(query, candidates):
    # Score each (query, candidate) pair jointly with the cross-encoder
    pairs = [[query, doc] for doc in candidates]
    inputs = tokenizer(pairs, padding=True, truncation=True,
                       return_tensors='pt', max_length=512)
    with torch.no_grad():
        scores = reranker(**inputs).logits.squeeze(-1)
    # Return candidates ordered from most to least relevant
    order = scores.argsort(descending=True)
    return [candidates[int(i)] for i in order]

Component 4: Contextual Answer Generation

With the most relevant chunks retrieved, I use sequence-to-sequence models for answer generation:

Model: mT5 (Multilingual Text-to-Text Transfer Transformer)

from transformers import MT5ForConditionalGeneration, MT5Tokenizer

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-large")
tokenizer = MT5Tokenizer.from_pretrained("google/mt5-large")

def generate_answer(query, context):
    input_text = f"question: {query} context: {context}"
    inputs = tokenizer(input_text, return_tensors="pt", 
                      max_length=1024, truncation=True)
    
    outputs = model.generate(
        **inputs,
        max_length=256,
        num_beams=4,
        early_stopping=True,
        no_repeat_ngram_size=3
    )
    
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

Deployment: FastAPI Service

For production deployment, I built a REST API with FastAPI:

Key Features

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="IndicRAG API")

class Query(BaseModel):
    question: str
    language: str
    top_k: int = 5

@app.post("/ask")
async def ask_question(query: Query):
    # Retrieve relevant chunks
    chunks = await retrieve_chunks(
        query.question, 
        query.language, 
        top_k=query.top_k
    )
    if not chunks:
        raise HTTPException(status_code=404, detail="No relevant documents found")

    # Rerank
    reranked = await rerank_chunks(query.question, chunks)
    
    # Generate answer
    answer = await generate_answer(
        query.question, 
        reranked[:3]
    )
    
    return {
        "answer": answer,
        "sources": [chunk.metadata for chunk in reranked[:3]],
        "confidence": calculate_confidence(reranked)
    }
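
For reference, a client request against this endpoint might look like the following (the host, port, and language-code format are assumptions, not part of the original post):

import requests

response = requests.post(
    "http://localhost:8000/ask",
    json={
        "question": "What documents are required for a property tax appeal?",
        "language": "hi",
        "top_k": 5,
    },
)
print(response.json()["answer"])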

Performance Optimizations

1. Model Quantization

Reduced model size by 4x using INT8 quantization with negligible accuracy loss:
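
The quantization code isn't shown in the post; as one possible sketch, PyTorch's dynamic quantization can shrink the cross-encoder like this (other toolchains such as ONNX Runtime would also work):

import torch

# Dynamic INT8 quantization of the reranker's linear layers
# (reusing the `reranker` model loaded in the reranking section)
quantized_reranker = torch.quantization.quantize_dynamic(
    reranker, {torch.nn.Linear}, dtype=torch.qint8
)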

2. Batch Processing

Process multiple queries together for better GPU utilization:

# Batch encoding
embeddings = model.encode(
    sentences,
    batch_size=32,
    show_progress_bar=False,
    convert_to_numpy=True
)

3. Approximate Nearest Neighbors

For very large document collections (>10M chunks), use HNSW indexing:

# FAISS HNSW index for fast approximate search
index = faiss.IndexHNSWFlat(dimension, 32)  # 32 = graph neighbors per node (M)
index.hnsw.efConstruction = 200             # build-time search depth
index.hnsw.efSearch = 128                   # query-time search depth

Evaluation Metrics

I evaluated IndicRAG on multiple benchmarks.

Real-World Challenges & Solutions

Challenge 1: Code-Mixed Queries

Users often ask "What is property tax in मुंबई (Mumbai)?" (mixing English and Hindi)

Solution: Language detection with fallback to multi-script embedding models
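
As a sketch of that routing logic (the script ranges and routing policy here are illustrative, not the exact IndicRAG detector):

import re

# Unicode script ranges for a few supported scripts (extend as needed)
DEVANAGARI = re.compile(r'[\u0900-\u097F]')   # Hindi, Marathi, ...
TAMIL = re.compile(r'[\u0B80-\u0BFF]')
LATIN = re.compile(r'[A-Za-z]')

def detect_scripts(query):
    scripts = set()
    if DEVANAGARI.search(query):
        scripts.add('devanagari')
    if TAMIL.search(query):
        scripts.add('tamil')
    if LATIN.search(query):
        scripts.add('latin')
    return scripts

def route_query(query):
    scripts = detect_scripts(query)
    if len(scripts) != 1:
        # Code-mixed (or unknown) query: fall back to the multi-script
        # embedding model (LaBSE) rather than a language-specific pipeline
        return 'multilingual'
    return scripts.pop()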

Challenge 2: Domain Terminology

Legal and medical terms often lack good translations

Solution: Custom terminology dictionaries and domain-specific fine-tuning
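
A minimal sketch of the terminology-dictionary idea (the glossary entries and the query-expansion strategy are illustrative):

# Map domain terms to canonical forms so retrieval matches either variant
LEGAL_GLOSSARY = {
    "वसीयत": "will (testament)",
    "जमानत": "bail",
}

def expand_query(query, glossary=LEGAL_GLOSSARY):
    # Append canonical terms found in the query so documents in either form match
    expansions = [canonical for term, canonical in glossary.items() if term in query]
    if not expansions:
        return query
    return f"{query} ({'; '.join(expansions)})"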

Challenge 3: Context Window Limitations

Long documents exceed model context limits (512-1024 tokens)

Solution: Hierarchical retrieval with document-level and chunk-level search
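
One way to sketch this, assuming a document-level FAISS index over pooled chunk embeddings plus one chunk index per document (the names and data layout are illustrative):

def hierarchical_retrieve(query_embedding, doc_index, chunk_indexes,
                          top_docs=5, top_chunks=20):
    # Stage 1: coarse search over document-level embeddings
    _, doc_ids = doc_index.search(query_embedding, top_docs)
    # Stage 2: fine-grained search within the chunks of the selected documents
    results = []
    for doc_id in doc_ids[0]:
        scores, chunk_ids = chunk_indexes[int(doc_id)].search(query_embedding, top_chunks)
        for score, chunk_id in zip(scores[0], chunk_ids[0]):
            results.append((float(score), int(doc_id), int(chunk_id)))
    results.sort(reverse=True)
    return results[:top_chunks]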

Lessons Learned

  1. Language-specific preprocessing matters: Hindi text requires different tokenization than Tamil
  2. Evaluation is hard: Translation-based evaluation misses cultural context
  3. User feedback loops: Implicit feedback (click-through) beats explicit ratings
  4. Fallback strategies: Always have a plan for when models fail, e.g. the keyword-search fallback sketched below
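
For the keyword-search fallback, one lightweight option is BM25 via the rank_bm25 package; a sketch, where chunk_texts is a placeholder for the chunk strings produced at ingestion:

from rank_bm25 import BM25Okapi

# chunk_texts: list of chunk strings from the ingestion stage (placeholder name)
tokenized_chunks = [text.lower().split() for text in chunk_texts]
bm25 = BM25Okapi(tokenized_chunks)

def keyword_fallback(query, n=5):
    # Return the n chunks with the highest BM25 scores for the query terms
    return bm25.get_top_n(query.lower().split(), chunk_texts, n=n)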

Open Source & GitHub

The complete IndicRAG codebase is available on GitHub:

github.com/DNSdecoded/IndicRAG

Includes the OCR and ingestion pipeline, FAISS indexing, cross-encoder reranking, mT5 answer generation, and the FastAPI service described above.

Conclusion

Building production-grade multilingual NLP systems requires balancing model performance, inference speed, and engineering complexity. IndicRAG demonstrates that with careful architecture and optimization, it's possible to serve accurate, fast question-answering across diverse Indian languages.

The key takeaways:

  1. Two-stage retrieval (dense search + cross-encoder reranking) balances speed and precision
  2. Cross-lingual embeddings like LaBSE, indexed with FAISS, make retrieval work across scripts
  3. Quantization, batching, and approximate indexes keep inference costs manageable

Questions about multilingual RAG or want to collaborate? Get in touch!
