1. Adira AI
Purpose:
Helps users instantly draft any legal document - contracts, notices, affidavits, and more - with zero legal formatting errors.
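A minimal sketch of the variable-field filling behind such drafting, assuming a `{{placeholder}}` template convention (the syntax, clause text, and `fill_template` helper are illustrative, not the production implementation):

```python
import re

def fill_template(template: str, fields: dict) -> str:
    """Replace {{field}} placeholders; flag unknown fields for review instead of dropping them."""
    def sub(match):
        key = match.group(1)
        return str(fields[key]) if key in fields else f"[MISSING: {key}]"
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

clause = "This Agreement is made on {{date}} between {{party_a}} and {{party_b}}."
print(fill_template(clause, {"date": "1 June 2024", "party_a": "Acme Pvt. Ltd."}))
# party_b is not supplied, so it is flagged rather than silently removed
```

Flagging missing fields (rather than leaving blanks) is what keeps generated drafts reviewable before signature.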
Key Capabilities:
- Converts user input into full-length, legally valid documents
- Edits, redlines, and auto-corrects clauses
- Suggests missing legal sections based on document type
Behind the Scenes:
- Operates on a legal language model fine-tuned on thousands of Indian contracts
- Maintains template logic, jurisdictional nuance, and conditional phrasing
- Smart handling of variable fields like party names, dates, etc.
Tech Stack / Tools Used:
- LLM APIs (OpenAI GPT-4 / fine-tuned legal LLMs)
- python-docx, pdfplumber, reportlab for DOCX/PDF output
- re (regex), spaCy for clause detection and parsing
- Internal template management system for dynamic field handling
Business Outcomes Achieved:
- Reduced average drafting time from 2–3 hours to under 10 minutes
- Auto-generated over 5,000 documents in the alpha phase
- Achieved 98% accuracy in clause structure and jurisdictional tagging in pilot feedback
- Enabled law firms to serve 2.5x more clients per day with the same team
Optimizations and Their Outcomes:
- Replaced static prompts with dynamic templates → improved personalization and formatting accuracy by 40%
- Batched the clause suggestion engine → reduced processing time per document by 25%
- Added “smart fill-in” placeholders for key terms → reduced manual editing post-generation
2. AI Courtroom
Purpose:
Simulates real courtroom hearings to prepare lawyers with case analysis, counterarguments, and judgment predictions.
Key Capabilities:
- Auto-generates opposing counsel arguments
- Highlights weak points in uploaded petitions or briefs
- Suggests case laws and potential judge opinions
Behind the Scenes:
- Multi-layer AI logic for each actor in the courtroom (Lawyer, Opposing Lawyer, Judge)
- Contextual understanding of petitions and legal issues
- Generates dynamic legal rebuttals and closing arguments
Tech Stack / Tools Used:
- LLM API orchestration using memory-based contextual chains
- PyMuPDF, pdfminer for parsing legal PDFs
- Text classification using scikit-learn for weak-point detection
- Precedent case fetch logic using indexed search of Indian judgments
Business Outcomes Achieved:
- Reduced hearing prep time from 7–8 hours to ~15 minutes
- Simulated over 700 courtroom rehearsals in pre-release trials
- Helped litigation teams improve argument completeness by 60%, as measured by missing citations and issue handling
Optimizations and Their Outcomes:
- Cached key-phrase interpretations → reduced repeat processing by 40%
- Shifted to staged context loading for memory efficiency → enabled longer case briefs to be processed without lag
- Layered judge logic with precedent lookup → improved judgment prediction relevance by 33%
3. LegalGPT
Purpose:
Acts as an internal knowledge base across products - answering legal questions, summarizing judgments, and interpreting statutes.
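A toy sketch of the retrieval step behind such Q&A, using a bag-of-words similarity in place of the real embedding model (the corpus snippets, `embed`, and `retrieve` helpers are illustrative; the actual stack uses sentence-transformers and FAISS):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real pipeline uses dense sentence embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "Section 138 NI Act: dishonour of cheque for insufficiency of funds",
    "Article 21: protection of life and personal liberty",
]

def retrieve(query: str) -> str:
    # Rank indexed passages by similarity; the best one becomes LLM context (RAG).
    return max(corpus, key=lambda doc: cosine(embed(query), embed(doc)))

print(retrieve("cheque bounce penalty"))
```

Retrieval grounds each answer in indexed statutes and judgments, which is what makes the cited responses auditable.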
Key Capabilities:
- Conversational Q&A based on Indian law
- Automatic case summarization
- Highlights relevant sections and links laws to facts
Behind the Scenes:
- Continuously trained on Indian case law, statutes, and regulatory updates
- Integrated across other products to support context-aware decision-making
Tech Stack / Tools Used:
- Fine-tuned LLM using Indian legal datasets (case law, statutes, bar council updates)
- Retrieval-augmented generation (RAG) over indexed legal corpora
- transformers, sentence-transformers, FAISS for semantic similarity and response ranking
- Scheduled refresh of law databases via internal ETL sync
Business Outcomes Achieved:
- Used as a central engine across the Adira and Courtroom tools
- Reduced legal research effort by 70% per query for law firm clients
- Delivered over 10,000 responses with domain-aligned legal citations and summaries
- Increased response consistency across legal support teams by 50%
Optimizations and Their Outcomes:
- Fine-tuned answer-length control → reduced verbosity and improved response quality
- Legal topic classifier → improved answer routing speed by 20%
- Embedding model upgrade → improved semantic hit rate by 35% for user queries
4. Legal ETL Engine
Purpose:
Automates the end-to-end process of collecting, cleaning, and updating Indian Acts into our legal database.
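A minimal sketch of the section-extraction step, assuming a simplified `1. Heading.-Body` layout for the raw Act text (the sample text, regex pattern, and `extract_sections` helper are illustrative; real India Code pages need fuller parsing):

```python
import re

RAW = """THE SAMPLE ACT, 2024
1. Short title.-This Act may be called the Sample Act, 2024.
2. Definitions.-In this Act, "notice" means a written notice."""

def extract_sections(raw: str) -> list:
    """Split raw Act text into (number, heading, body) records for indexing."""
    pattern = re.compile(r"^(\d+)\.\s+([^.]+)\.-(.*)$", re.MULTILINE)
    return [
        {"section": int(num), "heading": head.strip(), "body": body.strip()}
        for num, head, body in pattern.findall(raw)
    ]

for rec in extract_sections(RAW):
    print(rec["section"], rec["heading"])
```

Normalizing every Act into uniform section records is what enables consistent clause referencing across the other products.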
Key Capabilities:
- Scrapes Indian laws from trusted government sources (e.g., India Code)
- Cleans formatting, extracts sections and sub-sections
- Regularly adds or updates central and state laws in the internal index
Behind the Scenes:
- Built as an event-driven pipeline using AWS Lambda functions
- Scheduled extraction, transformation, and load workflows
- Data normalization ensures uniformity in sectioning, tagging, and clause referencing
- Automatically flags amendments or new acts for human legal review before indexing
Tech Stack / Tools Used:
- requests, BeautifulSoup, lxml for scraping
- pandas for data wrangling
- re, nltk, spaCy for legal clause structuring and token-level cleanup
Business Outcomes Achieved:
- Automatically ingests and updates over 200 Indian Acts and amendments monthly
- Eliminated 100% of manual legal data entry tasks
- Improved clause-mapping accuracy across products by 40% due to structured legal indexing
- Enabled enterprise clients to access up-to-date statutes with zero lag
Optimizations and Their Outcomes:
- Migrated to lightweight Lambda triggers → reduced compute cost by 60%
- Implemented delta-based update checks → minimized redundant downloads and reprocessing
- Added a semantic flagging layer → improved Act amendment detection accuracy by 45%
5. Lawyer Connect
Purpose:
Powers Claw’s “Lawyer Connect” initiative - a network feature that matches lawyers with other lawyers (for referrals and collaborations) and with clients (for legal services) based on availability, expertise, ratings, and past behavior.
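Such matching can be sketched as nearest-neighbour search over profile feature vectors; the feature choices, profile names, and `best_match` helper below are illustrative assumptions (FAISS plays this role at scale):

```python
import math

# Hypothetical profile vectors: [years_experience, rating, avg_response_hours]
profiles = {
    "adv_mehta": [12.0, 4.8, 2.0],
    "adv_rao":   [3.0, 4.5, 1.0],
    "adv_singh": [20.0, 3.9, 12.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def best_match(query_vec):
    """Return the profile whose feature vector is most similar to the query."""
    return max(profiles, key=lambda name: cosine(query_vec, profiles[name]))

print(best_match([10.0, 4.7, 2.5]))
```

In production the raw similarity would be only one input; behavioral signals re-rank the candidates, as described below.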
Key Capabilities:
- Connects two user types: Client to Lawyer (for legal services) and Lawyer to Lawyer (for referrals)
- Recommends best-fit matches based on case category, language, location, and lawyer history
- Enables dynamic re-ranking based on real-time availability, communication score, and case complexity
- Supports a feedback loop through user interactions to improve future recommendations
Behind the Scenes:
- The recommendation engine uses a hybrid model combining content-based filtering (for skill and location) and behavioral data (for engagement, completion rate, responsiveness).
- A modular architecture separates Lawyer-to-Lawyer and Lawyer-to-Client matching logic but uses a common embedding layer for user profiles.
- Profiles are vectorized using multi-feature embeddings (experience, practice area, ratings, average response time, etc.), and nearest neighbors are identified using cosine similarity.
- Real-time triggers update recommendation scores when a lawyer accepts, declines, or interacts with a profile.
- Custom weight tuning prioritizes newer lawyers during off-peak times, helping equalize exposure.
Tech Stack / Tools Used:
- scikit-learn, NumPy for model logic
- FAISS for fast nearest-neighbor search
- pandas for user profiling and ranking tables
- Simple in-memory cache layer for recent interactions
- CSV-based mock data ingestion layer for offline testing
Optimizations and Their Outcomes:
- Replaced cosine similarity with a hybrid score (distance + feedback weighting) → improved match relevance in A/B testing
- Integrated a response-time decay function → improved quality of active lawyer matches
- Added an asynchronous job queue for profile score updates → reduced response latency by 50%
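The hybrid score with response-time decay mentioned above might look like this sketch (the weights, half-life, and `hybrid_score` signature are illustrative assumptions, not the tuned production values):

```python
def hybrid_score(similarity: float, feedback: float, hours_since_response: float,
                 w_sim: float = 0.6, w_fb: float = 0.3, half_life: float = 24.0) -> float:
    """Blend similarity with feedback weighting, then decay by how stale the lawyer's last response is."""
    decay = 0.5 ** (hours_since_response / half_life)  # halves every `half_life` hours
    return (w_sim * similarity + w_fb * feedback) * decay

active = hybrid_score(0.9, 0.8, hours_since_response=2.0)
stale = hybrid_score(0.9, 0.8, hours_since_response=72.0)
print(active > stale)  # an equally similar but unresponsive lawyer ranks lower
```

The decay term is what pushes recently active lawyers up the ranking without discarding similarity entirely.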