
Cloud Analytics

AutoML Pipeline (Beta)

Automated machine learning from operational data to deployed edge model: feature engineering, architecture search, A/B testing, and one-click deployment, with no data scientist required.

[Pipeline dashboard: stage status (Data > Features > Training > Eval > Deploy), feature-selection scores (vib_rms 92%, temp_delta 87%, fft_peak_1 81%, power_ratio 74%, bearing_env 68%), training progress, and an evaluation table: XGBoost AUC 0.967 / F1 0.942 / 2 ms (best), LSTM 0.951 / 0.918 / 8 ms, IsoForest 0.923 / 0.895 / 1 ms. Confusion matrix: TP 847, FP 12, FN 23, TN 918; precision 98.6%, recall 97.4%. Deploy target: ONNX INT8, 2.4 MB.]

automl@twinedge:~$ pipeline run --config pump_anomaly_v3
[Step 1/5] Data curation: 247,391 rows, 90-day window
[Step 2/5] Feature engineering: 24 sensors -> 120 features
[Step 3/5] Hyperparameter search: 200 trials (Bayesian)
[Step 3/5] Best: XGBoost lr=0.042 depth=8 n_est=340
[Step 4/5] Validation AUC: 0.967 | F1: 0.942 | Precision: 98.6%
[Step 4/5] Drift check: KL=0.012 PSI=0.008 (within bounds)
[Step 5/5] ONNX export: INT8 quantized, 2.4MB
[Step 5/5] Deploying to 47 edge devices (staged rollout)...

Pipeline Stages

Six automated stages from raw telemetry to production edge model.

01

Data Curation

Automatic selection of training windows from operational history. Outlier removal, class balancing, and feature normalization handled without manual intervention.
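A minimal sketch of what this stage does, in plain Python: drop outliers via the interquartile-range rule, then downsample the majority class. The function and key names (`curate`, `vib_rms`, `is_anomaly`) are hypothetical, not the product's actual API.

```python
def curate(rows, label_key="is_anomaly", value_key="vib_rms"):
    """Hypothetical curation sketch: IQR outlier removal,
    then truncate the majority class to the minority count."""
    values = sorted(r[value_key] for r in rows)
    n = len(values)
    q1, q3 = values[n // 4], values[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    kept = [r for r in rows if lo <= r[value_key] <= hi]

    # Class balancing: keep equal numbers of positive and negative rows.
    pos = [r for r in kept if r[label_key]]
    neg = [r for r in kept if not r[label_key]]
    k = min(len(pos), len(neg))
    return pos[:k] + neg[:k]
```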

02

Feature Engineering

Time-domain statistics, frequency-domain features (FFT, wavelet), rolling windows, and cross-sensor correlations generated from raw telemetry.
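The time-domain side of this stage can be sketched as rolling-window statistics over a raw signal. Feature names here (mean, RMS, crest factor) are standard vibration features; the windowing scheme is an illustrative assumption, not the real feature library.

```python
import math

def window_features(signal, window=4):
    """Sketch: non-overlapping rolling windows, each reduced to
    a few time-domain statistics commonly used for machinery data."""
    feats = []
    for i in range(0, len(signal) - window + 1, window):
        w = signal[i:i + window]
        mean = sum(w) / window
        rms = math.sqrt(sum(x * x for x in w) / window)
        peak = max(abs(x) for x in w)
        feats.append({
            "mean": mean,
            "rms": rms,
            # Crest factor: peak-to-RMS ratio, a classic bearing-fault indicator.
            "crest_factor": peak / rms if rms else 0.0,
        })
    return feats
```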

03

Model Selection

Bayesian hyperparameter search across Isolation Forest, XGBoost, LSTM, and Autoencoder architectures. Best model selected by validation AUC and inference cost.
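The selection rule at the end of the search can be sketched as "best AUC subject to an inference-cost budget." The trial dictionary shape and the latency budget are assumptions for illustration; the sample numbers mirror the dashboard above.

```python
def select_model(trials, max_latency_ms=10.0):
    """Sketch: among completed trials, pick the highest
    validation AUC that fits the inference-cost budget."""
    eligible = [t for t in trials if t["latency_ms"] <= max_latency_ms]
    if not eligible:
        raise ValueError("no trial meets the latency budget")
    return max(eligible, key=lambda t: t["auc"])
```

Tightening the budget changes the winner: with the dashboard's numbers, a 1.5 ms cap would select IsoForest over the higher-AUC XGBoost.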

04

A/B Testing

Deploy the candidate model alongside the production baseline. Route 10% of inference traffic to the challenger. Promote automatically if precision improves by at least 2 percentage points.
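One common way to implement this kind of split is to hash a stable identifier so each asset consistently sees the same model for the whole test window. This is a sketch under that assumption; the function names and the use of SHA-256 are illustrative, not the product's mechanism.

```python
import hashlib

def route(asset_id, challenger_fraction=0.10):
    """Deterministic traffic split: hash the asset ID so an asset
    always lands in the same bucket during the test window."""
    h = int(hashlib.sha256(asset_id.encode()).hexdigest(), 16)
    return "challenger" if (h % 100) < challenger_fraction * 100 else "baseline"

def should_promote(challenger_precision, baseline_precision, margin=0.02):
    """Promote when the challenger beats the baseline's precision
    by at least the configured margin (2 points by default)."""
    return challenger_precision >= baseline_precision + margin
```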

05

Edge Deployment

One-click ONNX export with INT8 quantization. Staged rollout to edge fleet with automatic rollback if prediction quality degrades.
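The staged-rollout-with-rollback behavior can be sketched as a simple loop that widens coverage stage by stage and reverts everything if a quality gate fails. The stage fractions and the `quality_check` callback are assumptions for illustration.

```python
def staged_rollout(devices, quality_check, stages=(0.05, 0.25, 1.0)):
    """Sketch: deploy to a growing fraction of the fleet; if the
    quality check fails at any stage, roll everything back."""
    deployed = []
    for frac in stages:
        # Widen coverage to the current stage's share of the fleet.
        deployed = list(devices[:max(1, int(len(devices) * frac))])
        if not quality_check(deployed):
            return [], "rolled_back"
    return deployed, "completed"
```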

06

Continuous Monitoring

Drift detection on input distributions and prediction confidence. Triggers automatic retraining when model accuracy drops below configured threshold.
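The two drift statistics named in the specs (KL divergence and PSI) are both computed over matching histogram bins of a feature. A minimal sketch, with illustrative thresholds (the real bounds are configuration, per the terminal log above):

```python
import math

def kl(p, q, eps=1e-6):
    """KL divergence D(p||q) between two binned distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

def psi(expected, actual, eps=1e-6):
    """Population Stability Index over matching histogram bins."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

def drift_detected(train_hist, live_hist, kl_max=0.05, psi_max=0.1):
    """Flag drift if either statistic exceeds its bound."""
    return (kl(train_hist, live_hist) > kl_max
            or psi(train_hist, live_hist) > psi_max)
```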

Pipeline Specifications

Training Data: Auto-curated from time-series telemetry
Feature Library: 120+ pre-built industrial features
Model Architectures: Isolation Forest, XGBoost, LSTM, Autoencoder, Random Forest
Hyperparameter Search: Bayesian optimization, 200 trials
Export Format: ONNX with INT8/FP16 quantization
A/B Test Duration: Configurable, default 7 days
Drift Detection: KL divergence + PSI on input features
Retraining Trigger: Accuracy drop > 3% or manual

ML Models That Improve Themselves

Point the pipeline at your operational data and let it build, test, and deploy production-grade models to every edge device in your fleet.