
N8N: Automated EV Utilisation Alerting System

Goal

To build a fully autonomous, real-time anomaly detection pipeline that uses Generative AI to analyse critical infrastructure data.

Technology Stack

| Item | Tool | Detail |
| --- | --- | --- |
| Data Engineering | PostgreSQL, Python (Pandas, SQLAlchemy) | Building a scalable, production-ready data backbone. |
| AI / NLP | OpenAI (GPT-4.1 mini) | Generative AI integration for data analysis and message synthesis. |
| Automation | n8n | End-to-end process scheduling and execution flow control. |
| Messaging | Gmail API (OAuth 2.0) | Secure external API integration, handling the OAuth 2.0 authentication flow. |

Pipeline & Architecture (How It Works)


[Architecture diagram]

Database creation

Python script to create SQL database (PostgreSQL)
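The bootstrap step can be sketched with SQLAlchemy. Column names here are illustrative (the project's actual schema may differ), and the connection string uses in-memory SQLite as a self-contained stand-in for the real PostgreSQL URL.

```python
# Sketch of the database bootstrap step. Schema and column names are
# assumptions; swap the URL for something like
# "postgresql+psycopg2://user:pass@localhost/evdb" in a real setup.
from sqlalchemy import (
    Column, DateTime, Float, Integer, MetaData, String, Table, create_engine,
)

engine = create_engine("sqlite:///:memory:")  # stand-in for PostgreSQL
metadata = MetaData()

# Raw utilisation readings per station.
utilisation = Table(
    "utilisation", metadata,
    Column("id", Integer, primary_key=True),
    Column("station_id", String, nullable=False),
    Column("utilisation_pct", Float, nullable=False),
    Column("recorded_at", DateTime, nullable=False),
)

# Readings that breached the threshold, with an assigned severity.
alerts = Table(
    "alerts", metadata,
    Column("id", Integer, primary_key=True),
    Column("station_id", String, nullable=False),
    Column("utilisation_pct", Float, nullable=False),
    Column("severity", String, nullable=False),
)

metadata.create_all(engine)
```

Keeping both tables in one `MetaData` lets a single `create_all` call build (or idempotently skip) the whole schema.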

Data Ingestion (PostgreSQL)

Python scripts simulate real-time EV station utilisation data, ingesting it into a PostgreSQL database where it’s tracked in the utilisation and alerts tables.
Generate data
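A minimal sketch of the generator step, assuming made-up station IDs and a uniform utilisation distribution; the real script's distribution and fields may differ.

```python
# Simulate one batch of real-time EV station utilisation readings.
# Station IDs and the uniform distribution are illustrative assumptions.
import random
from datetime import datetime, timezone

STATIONS = ["EV-001", "EV-002", "EV-003"]  # hypothetical station IDs

def generate_reading(station_id: str) -> dict:
    """Simulate one utilisation reading for a station."""
    return {
        "station_id": station_id,
        "utilisation_pct": round(random.uniform(0, 100), 1),
        "recorded_at": datetime.now(timezone.utc),
    }

batch = [generate_reading(s) for s in STATIONS]
# Each dict maps onto one row of the utilisation table for ingestion.
```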
Check anomaly
Connect to the database with Beekeeper Studio
Test the SQL query
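One way to smoke-test the alert query before wiring it into n8n is to run it against an in-memory SQLite copy of the table. The SQL below is a plausible reconstruction of the query, not a verbatim copy of the project's.

```python
# Smoke-test the >70% alert query against an in-memory table.
# The SQL text is an assumed reconstruction of the production query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE utilisation (station_id TEXT, utilisation_pct REAL)")
conn.executemany(
    "INSERT INTO utilisation VALUES (?, ?)",
    [("EV-001", 96.0), ("EV-002", 40.0), ("EV-003", 72.5)],
)

rows = conn.execute(
    "SELECT station_id, utilisation_pct FROM utilisation "
    "WHERE utilisation_pct > 70 ORDER BY utilisation_pct DESC"
).fetchall()
# rows → [('EV-001', 96.0), ('EV-003', 72.5)]
```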

n8n Setup


Orchestration Trigger (n8n)

The workflow starts on a schedule via n8n's Schedule Trigger (a cron-style job), polling for new data at each interval.

Anomaly Detection & Filtering

The PostgreSQL node queries the database for all new utilisation events exceeding the 70% threshold. An IF Node validates that at least one alert exists.

Intelligent Message Generation (OpenAI)

The matching rows are aggregated into a single JSON object and passed to the OpenAI node.
A dedicated system/user prompt instructs the LLM to analyse the raw data, prioritise CRITICAL alerts (>95%), and generate an alert message with a severity marker (🚨, 🔴, 🟡).
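The prompt-building step might look like the sketch below. The prompt wording is illustrative, not the project's actual prompt, and the API call itself is shown commented out since it requires credentials.

```python
# Bundle alert rows into a chat-completion request for the OpenAI node.
# SYSTEM_PROMPT wording is an illustrative assumption.
import json

SYSTEM_PROMPT = (
    "You are an infrastructure alerting assistant. Analyse the EV "
    "utilisation alerts, list CRITICAL stations (>95%) first, and tag "
    "each line with a severity marker: 🚨 critical, 🔴 high, 🟡 elevated."
)

def build_messages(alerts: list[dict]) -> list[dict]:
    """Pack the alert rows into a system/user message pair."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": json.dumps(alerts, default=str)},
    ]

messages = build_messages([{"station_id": "EV-001", "utilisation_pct": 96.0}])
# The actual call (needs an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4.1-mini", messages=messages)
```

Sending the rows as raw JSON lets the LLM do the prioritisation and phrasing, replacing what would otherwise be hand-maintained message templates.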

Notification Deployment

The final, polished message from the LLM is sent immediately to the relevant team via the Gmail API, completing the loop.
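The Gmail API expects a base64url-encoded RFC 2822 message in the request's `raw` field. This sketch builds that payload; the addresses are placeholders, and the authenticated send call (via google-api-python-client) is left as a comment.

```python
# Build the "raw" payload the Gmail API send endpoint expects.
# Recipient address and subject are placeholder assumptions.
import base64
from email.mime.text import MIMEText

def build_gmail_payload(body: str, to: str, subject: str) -> dict:
    """Wrap the LLM's alert text as a base64url-encoded Gmail message."""
    msg = MIMEText(body)
    msg["to"] = to
    msg["subject"] = subject
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    return {"raw": raw}

payload = build_gmail_payload(
    "🚨 EV-001 at 96% utilisation",
    "ops-team@example.com",          # placeholder recipient
    "EV utilisation alert",
)
# With an OAuth 2.0-authorised service object:
# service.users().messages().send(userId="me", body=payload).execute()
```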

Impact & Achievements (The Bottom Line)

These outcomes frame the project's value around efficiency and innovation.
100% Automation: Achieved full autonomy from data ingestion to notification, eliminating the need for manual monitoring or template editing.
LLM Integration Success: Demonstrated skill in integrating Generative AI into a core business process, replacing rigid logic with dynamic, natural-language analysis.
Enhanced Response Clarity: The system ensures that alerts are highly informative, reducing time required for engineers to understand and action the issue.
Robustness & Scalability: Implemented a production-ready PostgreSQL setup and the Google OAuth 2.0 flow for reliable external service integration, leaving the system robust and ready to scale.
