Deploy machine learning models in the cloud or on-premise. Get metrics and ensure proper governance and compliance for your running ML models. Create powerful inference graphs made up of multiple components. Provide a consistent serving layer for models built using heterogeneous ML toolkits.
Seamless ML model deployment
A single layer to manage all your ML deployments that serves models built in any open-source or commercial model building framework. You can make use of powerful Kubernetes features like custom resource definitions to manage model graphs.
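As a sketch of how this looks in practice, a model graph is declared through Seldon Core's `SeldonDeployment` custom resource. The deployment name, bucket path and model URI below are illustrative placeholders:

```yaml
# Minimal SeldonDeployment resource (illustrative names and URIs)
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-classifier              # hypothetical deployment name
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        implementation: SKLEARN_SERVER        # pre-packaged scikit-learn server
        modelUri: gs://my-bucket/sklearn/iris # placeholder model location
```

Applying a manifest like this with `kubectl apply -f` creates the deployment and exposes a standard prediction endpoint managed by the Seldon Core operator.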
Entirely framework and language agnostic
Supports all your preferred ML libraries, toolkits and languages. Easily connect your continuous integration and deployment (CI/CD) tools to scale and update your deployments, and run on-premise or in your preferred cloud.
Seldon Core enables your most advanced deployments with experiments, ensembles and transformers and ensures proper governance and compliance for your ML models.
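For illustration, an inference graph that chains a pre-processing transformer into a model can be expressed as nested components in the same custom resource; the component names and model URI here are placeholders:

```yaml
# Inference graph sketch: a transformer feeding a model (illustrative values)
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: pipeline-example             # hypothetical deployment name
spec:
  predictors:
    - name: default
      graph:
        name: feature-transformer    # pre-processing step runs first
        type: TRANSFORMER
        children:
          - name: classifier
            implementation: XGBOOST_SERVER        # pre-packaged XGBoost server
            modelUri: gs://my-bucket/xgboost/model # placeholder model location
```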
Full lifecycle management
Provides you with end-to-end control, including updating, scaling, monitoring and compliance. Production rollouts via canary or shadow deployments are also possible. Optimise your model deployments with a comprehensive overview of the status of your models.
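A canary rollout can be sketched by listing two predictors with a traffic split; the names, weights and model URIs below are illustrative:

```yaml
# Canary rollout sketch: traffic split between two predictors (illustrative)
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: canary-example               # hypothetical deployment name
spec:
  predictors:
    - name: main
      traffic: 75                    # 75% of requests to the current model
      graph:
        name: classifier
        implementation: SKLEARN_SERVER
        modelUri: gs://my-bucket/model-v1   # placeholder: current version
    - name: canary
      traffic: 25                    # 25% of requests to the candidate model
      graph:
        name: classifier
        implementation: SKLEARN_SERVER
        modelUri: gs://my-bucket/model-v2   # placeholder: candidate version
```

A shadow deployment can be expressed similarly by marking the second predictor with `shadow: true`, which mirrors live traffic to it without returning its responses to callers.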