PipelineAI is a broad product with these high-level features:
- Model Training
- Model Optimization
- Model Inference / Prediction
- Model and Experiment Management
- Live Traffic Shifting
- Multi-Armed Bandit and A/B Tests
- End-to-End Pipeline Optimization
PipelineAI Server Installation requires only Docker. The server runs as a single container that handles model prediction or model training.
You can follow the PipelineAI Docker Quick Start to set up the server on your local machine using Docker, including devices such as the Raspberry Pi and NVIDIA Jetson.
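As a minimal sketch of the single-container setup, starting a prediction server might look like the following (the image name, tag, and port are illustrative placeholders, not the official ones from the Quick Start):

```
# Pull a PipelineAI prediction image (hypothetical image name and tag).
docker pull pipelineai/predict-cpu:latest

# Run it as a single detached container.
# Port 8080 is an assumed prediction REST endpoint.
docker run -d --name pipeline-predict -p 8080:8080 pipelineai/predict-cpu:latest
```

Since only Docker is required, the same commands work anywhere a Docker daemon runs, from a laptop to a Raspberry Pi.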
PipelineAI Cluster Installation requires a Docker orchestrator such as Docker Swarm, Mesos, or Kubernetes. PipelineAI Cluster unlocks more advanced cluster features, including auto-scaled model predictions, distributed model training, hybrid-cloud fault tolerance, and multi-armed bandit experiments on live production traffic.
You can follow the PipelineAI Kubernetes Quick Start to set up the cluster in any environment that supports Kubernetes, including AWS, Azure, Google Cloud, and OpenStack.
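On Kubernetes, the cluster setup can be sketched as a standard Deployment manifest like the one below. The resource names, image, and replica count are illustrative assumptions for this sketch, not the manifests shipped with the Quick Start:

```
# Hypothetical Kubernetes Deployment for a PipelineAI prediction service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pipeline-predict
spec:
  replicas: 3                  # a HorizontalPodAutoscaler can scale this automatically
  selector:
    matchLabels:
      app: pipeline-predict
  template:
    metadata:
      labels:
        app: pipeline-predict
    spec:
      containers:
      - name: predict
        image: pipelineai/predict-cpu:latest   # placeholder image name
        ports:
        - containerPort: 8080                  # assumed prediction endpoint
```

Running predictions as a Deployment is what enables the auto-scaling and fault-tolerance features described above: Kubernetes replaces failed containers and adjusts the replica count under load.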
Logging and Metrics
To reduce operational complexity, PipelineAI abstracts away the Docker orchestration layer. However, all logs and metrics remain exposed, and they integrate with your existing enterprise logging, metrics, and alerting subsystems. Connectors are available for DataDog, CloudWatch, PagerDuty, Prometheus, Graphite, Stackdriver (Google Cloud), and many more.
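As one example of such an integration, if the prediction containers expose a metrics endpoint, a Prometheus scrape job could be configured as follows. The job name, metrics path, and target address are assumptions for this sketch, not values documented by PipelineAI:

```
# Hypothetical Prometheus scrape configuration for PipelineAI metrics.
scrape_configs:
  - job_name: pipelineai-predict
    metrics_path: /metrics                   # assumed metrics endpoint
    static_configs:
      - targets: ['pipeline-predict:8080']   # assumed service name and port
```

The other listed connectors (DataDog, CloudWatch, Graphite, and so on) follow the same pattern: point the monitoring system at the exposed metrics, rather than instrumenting the orchestration layer directly.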