Goal
To build a fully autonomous, real-time anomaly detection pipeline that uses Generative AI to analyse critical infrastructure data.
Technology Stack
Python, PostgreSQL, n8n, OpenAI API, Gmail API (Google OAuth 2.0), Beekeeper Studio
Pipeline & Architecture (How It Works)
Database Creation
A Python script creates the PostgreSQL database and its tables.
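A minimal sketch of that creation script is below. The real pipeline targets PostgreSQL (typically via psycopg2); sqlite3 stands in here purely so the example is self-contained and runnable. The table and column names are assumptions, since the source only names the utilisation and alerts tables.

```python
# Hedged sketch: sqlite3 stands in for PostgreSQL so the example runs anywhere.
# Column names and types are assumptions, not taken from the source.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS utilisation (
    id INTEGER PRIMARY KEY,
    station_id TEXT NOT NULL,
    utilisation_pct REAL NOT NULL,
    recorded_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS alerts (
    id INTEGER PRIMARY KEY,
    utilisation_id INTEGER REFERENCES utilisation(id),
    severity TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def create_tables(conn):
    # Execute both CREATE TABLE statements, then list the resulting tables.
    conn.executescript(DDL)
    return [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]

conn = sqlite3.connect(":memory:")
tables = create_tables(conn)
```

In the real setup the same DDL (with PostgreSQL types such as SERIAL and TIMESTAMPTZ) would run through a psycopg2 connection instead.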
Data Ingestion (PostgreSQL)
Python scripts simulate real-time EV station utilisation data, ingesting it into a PostgreSQL database where it’s tracked in the utilisation and alerts tables.
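The simulation script might look like the following sketch: random utilisation readings per station are generated and inserted. Again sqlite3 stands in for PostgreSQL, and the station IDs and column names are illustrative assumptions.

```python
# Hedged sketch of the real-time simulation: random readings are generated
# and inserted into the utilisation table. sqlite3 and all names are stand-ins.
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE utilisation (station_id TEXT, utilisation_pct REAL)")

def ingest_reading(conn, station_id):
    # Simulate one utilisation reading in the 0-100% range and store it.
    pct = round(random.uniform(0, 100), 2)
    conn.execute(
        "INSERT INTO utilisation (station_id, utilisation_pct) VALUES (?, ?)",
        (station_id, pct))
    return pct

for station in ("EV-001", "EV-002", "EV-003"):  # hypothetical station IDs
    ingest_reading(conn, station)

count = conn.execute("SELECT COUNT(*) FROM utilisation").fetchone()[0]
```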
Connect to the database with the Beekeeper Studio tool
n8n Set-Up
Orchestration Trigger (n8n)
The workflow is triggered on a schedule (an n8n Schedule Trigger acting as a cron job) to check for new data.
Anomaly Detection & Filtering
The PostgreSQL node queries the database for all new utilisation events exceeding the 70% threshold, and an IF node confirms that at least one alert exists before the workflow continues.
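The query and the IF-node check can be sketched as follows. Only the 70% threshold comes from the source; the SQL wording, column names, and sample readings are assumptions.

```python
# Hedged sketch of the PostgreSQL node's query (the exact SQL is an assumption):
QUERY = """
SELECT station_id, utilisation_pct
FROM utilisation
WHERE utilisation_pct > 70;
"""

# Sample readings standing in for the query result:
readings = [
    {"station_id": "EV-001", "utilisation_pct": 45.0},
    {"station_id": "EV-002", "utilisation_pct": 72.5},
    {"station_id": "EV-003", "utilisation_pct": 96.1},
]

# Filtering mirrors the WHERE clause; the IF node then simply checks
# whether at least one alert row exists before the workflow continues.
alerts = [r for r in readings if r["utilisation_pct"] > 70]
has_alerts = len(alerts) > 0
```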
Intelligent Message Generation (OpenAI)
The alerts, merged into a single JSON object, are passed to the OpenAI node. A dedicated System/User prompt instructs the LLM to analyse the raw data, prioritise CRITICAL alerts (>95%), and generate an alert message with a severity marker (🚨, 🔴, 🟡).
Notification Deployment
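A sketch of how the alert payload might be turned into the OpenAI node's prompt is below. Only the >95% CRITICAL threshold comes from the source; the 🔴/🟡 boundaries, the prompt wording, and the helper names are assumptions.

```python
# Hedged sketch of severity marking and prompt construction.
def severity(pct):
    if pct > 95:
        return "🚨"  # CRITICAL threshold (>95%) is stated in the source
    if pct > 85:
        return "🔴"  # assumed boundary
    return "🟡"      # remaining alerts above the 70% threshold

def build_messages(alerts):
    # System/User prompt pair as the OpenAI node might receive it (wording assumed).
    system = ("You are an infrastructure-monitoring assistant. Analyse the raw "
              "utilisation data, prioritise CRITICAL alerts (>95%), and write a "
              "concise alert message with a severity marker (🚨, 🔴, 🟡).")
    user = "\n".join(
        f"{a['station_id']}: {a['utilisation_pct']}% "
        f"{severity(a['utilisation_pct'])}"
        for a in alerts)
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

messages = build_messages([
    {"station_id": "EV-002", "utilisation_pct": 72.5},
    {"station_id": "EV-003", "utilisation_pct": 96.1},
])
```

The `messages` list is what would then be sent to the chat-completions endpoint; the API call itself is omitted here.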
The final, polished message from the LLM is sent immediately to the relevant team via the Gmail API, completing the loop.
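The Gmail API expects the outgoing message as a base64url-encoded MIME string in a "raw" field, so the payload-building step can be sketched with the standard library alone. The addresses, subject, and body are illustrative assumptions; the actual send call is only indicated in a comment.

```python
# Hedged sketch of building the Gmail API payload. Addresses and subject
# are hypothetical; only the payload format (base64url "raw") is standard.
import base64
from email.mime.text import MIMEText

def build_gmail_payload(body, sender, to, subject):
    msg = MIMEText(body)
    msg["From"] = sender
    msg["To"] = to
    msg["Subject"] = subject
    # Gmail API requires the full RFC 2822 message, base64url-encoded.
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    return {"raw": raw}

payload = build_gmail_payload(
    "🚨 CRITICAL: EV-003 at 96.1% utilisation.",
    "alerts@example.com", "oncall@example.com",
    "EV Station Utilisation Alert")

# Real send, once OAuth 2.0 credentials are in place:
# service.users().messages().send(userId="me", body=payload).execute()
```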
Impact & Achievements (The Bottom Line)
These points summarise the project's value, framed around efficiency and innovation.
100% Automation: Achieved full autonomy from data ingestion to notification, eliminating manual monitoring and template editing.
LLM Integration Success: Demonstrated skill in integrating Generative AI into a core business process, replacing rigid templated logic with dynamic, natural-language analysis.
Enhanced Response Clarity: Alerts are highly informative, reducing the time engineers need to understand and act on an issue.
Robustness & Scalability: Implemented a production-ready PostgreSQL setup and a working Google OAuth 2.0 flow for reliable external-service integration, leaving the system robust and ready to scale.