
Edge Intelligence

Edge ML Engine

Run anomaly detection, remaining-useful-life prediction, and efficiency optimization directly on the edge device — sub-50ms inference with zero cloud dependency.

[Interactive demo: live sensor inputs (vibration X/Y/Z, temperature, pressure, flow, current, RPM) are normalized into a feature tensor and scored by ONNX Runtime v1.17; the outputs drive anomaly detection (Isolation Forest + Autoencoder), remaining-useful-life prediction (XGBoost + LSTM ensemble), and efficiency optimization (BEP deviation regression), then fan out to the alert service, the dashboard, and MQTT cloud sync. A terminal pane shows the engine status:]

    twinedge@hub-pro:~$ twinedge ml status
    ML Inference Engine        ONNX Runtime 1.17 (ARM64 CPU)
    Loaded Models              3 / 20 max
    Memory Usage               124 MB / 256 MB limit
    [1] anomaly_iso_forest_v3      active   48ms   97.2% precision
    [2] rul_xgboost_bearing_v2     active   31ms   4.1hr RMSE
    [3] eff_regression_pump_v1     active   12ms   R²=0.94
    Last Inference             now
    Total Inferences (24h)     1,247,392
    Avg Latency (24h)          34ms
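To make the inference path concrete, here is a minimal sketch of a single prediction pass with the onnxruntime Python API, mirroring the eight-feature tensor in the demo above. The model filename and the output layout are assumptions for illustration, not the shipped artifacts.

    # Minimal single-prediction pass with ONNX Runtime on an edge CPU.
    # The model file is hypothetical; feature values mirror the demo above.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession(
        "anomaly_iso_forest_v3.onnx",        # hypothetical exported model
        providers=["CPUExecutionProvider"],  # CPU-only, as on an ARM64 hub
    )

    # Normalized features: vib_x, vib_y, vib_z, temp, pressure, flow, current, rpm
    features = np.array([[0.500, 0.999, 0.571, 0.011,
                          0.360, 0.969, 0.706, 0.060]], dtype=np.float32)

    input_name = sess.get_inputs()[0].name
    outputs = sess.run(None, {input_name: features})
    # Output layout depends on how the model was exported; here we assume
    # the first output carries the anomaly score (0.120 -> OK in the demo).
    print("anomaly score:", outputs[0])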

Model Library

Three model families purpose-built for rotating and stationary industrial equipment.

Anomaly Detection

Isolation Forest and Autoencoder models flag abnormal vibration, temperature, or pressure patterns before operators notice.

97.2% precision

Pump bearing fault detection
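How such a detector works in miniature: fit an Isolation Forest on windows of healthy operation, then score incoming readings; points the forest can isolate in a few splits score as anomalous. A scikit-learn sketch on synthetic data, not the product's actual training recipe:

    # Fit on "healthy" windows, flag readings that are easy to isolate.
    # Feature count matches the demo (8 channels); the data is synthetic.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, size=(5000, 8))

    detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
    detector.fit(healthy)

    reading = np.array([[0.5, 4.8, 0.6, 0.0, 0.4, 1.0, 0.7, 0.1]])  # vib_y spike
    anomaly_score = -detector.score_samples(reading)  # higher = more anomalous
    is_anomaly = detector.predict(reading)[0] == -1
    print("ANOMALY" if is_anomaly else "OK", float(anomaly_score[0]))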

Remaining Useful Life

XGBoost and LSTM models predict days-to-failure for bearings, seals, and impellers, starting from baselines trained on NASA's C-MAPSS dataset.

4.1 hr RMSE

Time-to-failure prediction error
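For a feel of the XGBoost half of such an ensemble, here is a hedged sketch: gradient-boosted regression from windowed sensor features to remaining hours of life. The features, degradation signal, and hyperparameters are synthetic stand-ins, not C-MAPSS data or the shipped model.

    # Gradient-boosted RUL regression on synthetic stand-in data.
    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 8))                 # windowed sensor features
    rul_hours = np.maximum(0.0, 1000 - 400 * X[:, 0] + rng.normal(0, 4, 2000))

    model = xgb.XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
    model.fit(X, rul_hours)

    pred_hours = model.predict(X[:1])[0]
    print(f"predicted RUL: {pred_hours / 24:.1f} days")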

Efficiency Optimization

Regression models track real-time deviation from the best efficiency point (BEP) against pump physics curves and recommend VFD setpoints that restore optimal flow.

R² = 0.94

Efficiency curve fit
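For intuition only, a sketch of BEP-deviation estimation: fit a quadratic efficiency-versus-flow curve and measure how far the live operating point sits from its vertex. The pump coefficients and readings below are invented; a real system would fit against the manufacturer's curves.

    # Fit a quadratic efficiency-vs-flow curve and locate the best
    # efficiency point (BEP) at its vertex. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    flow = np.linspace(20, 90, 50)                       # GPM
    eff = -0.02 * (flow - 60.0) ** 2 + 85.0 + rng.normal(0, 0.5, 50)

    a, b, c = np.polyfit(flow, eff, 2)                   # eff ~ a*q^2 + b*q + c
    bep_flow = -b / (2 * a)                              # parabola vertex

    current_flow = 52.0                                  # live flow reading
    deviation = (current_flow - bep_flow) / bep_flow
    print(f"BEP ~ {bep_flow:.1f} GPM; operating {deviation:+.1%} from BEP")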

Cloud-to-Edge Pipeline

01
Train

AutoML pipeline selects features and architecture from operational data

02
Export

One-click ONNX export with INT8 quantization for edge devices (see the sketch after this list)

03
Deploy

Staged rollout via OTA — A/B test against production baseline
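The export step can be pictured with onnxruntime's own quantization tooling. The file names below are hypothetical, and the product's one-click export presumably wraps something similar rather than exactly this call:

    # Dynamic INT8 quantization of a float32 ONNX model for edge deployment.
    from onnxruntime.quantization import QuantType, quantize_dynamic

    quantize_dynamic(
        model_input="rul_xgboost_bearing_v2.onnx",        # trained float32 model
        model_output="rul_xgboost_bearing_v2.int8.onnx",  # quantized edge artifact
        weight_type=QuantType.QInt8,                      # 8-bit integer weights
    )

Dynamic quantization converts weights to INT8 ahead of time and quantizes activations at runtime, shrinking the model and speeding up CPU inference with no calibration dataset required.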

Engine Specifications

Runtime: ONNX Runtime 1.17 (CPU)
Inference Latency: <50 ms per prediction (Raspberry Pi 4)
Batch Throughput: 500+ inferences/sec
Model Formats: ONNX, quantized INT8
Max Loaded Models: 20 concurrent
Memory Footprint: <256 MB for 10 models
Auto-Update: Cloud-pushed, staged rollout
Fallback: Previous model version on failure
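The fallback row describes a keep-last-known-good policy. A hedged sketch of that logic, with illustrative paths and a placeholder validation check:

    # If the newly pushed model fails to load or validate, keep serving
    # the previous version. Paths and the check below are illustrative.
    import onnxruntime as ort

    def load_with_fallback(new_path: str, prev_path: str) -> ort.InferenceSession:
        try:
            sess = ort.InferenceSession(new_path, providers=["CPUExecutionProvider"])
            if not sess.get_inputs():                 # minimal sanity check
                raise ValueError("model exposes no inputs")
            return sess
        except Exception:
            # Any load/validation failure: revert to the known-good version.
            return ort.InferenceSession(prev_path, providers=["CPUExecutionProvider"])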

Predict Failures Before They Happen

Deploy ML models to the plant floor in minutes, with no on-site data scientist required.