Kubeflow is an open-source platform for developing, deploying, and managing ML workflows on Kubernetes, whether running locally, on-premises, or in the cloud. The platform can be used at each stage of the ML workflow, including data preparation, model training, prediction serving, and service management.
Kubeflow creates, deploys, and manages Jupyter notebooks, provides ML model training via the TensorFlow training job operator (TFJob), and exports trained TensorFlow models to Kubernetes via TensorFlow Serving. Kubeflow also integrates with several platforms, such as Seldon Core, an open-source platform for deploying ML models on Kubernetes; NVIDIA Triton Inference Server, which maximizes GPU utilization when deploying ML/DL models; and MLRun Serving, an open-source serverless framework for deploying and monitoring ML/DL pipelines in real time.
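As a rough illustration of how the training job operator is used, the sketch below submits a single-worker TFJob custom resource with the Kubernetes Python client. It assumes the Kubeflow training operator is installed in the cluster; the image name, training script path, and namespace are hypothetical placeholders.

```python
# Minimal sketch: submit a TFJob custom resource to a cluster that has the
# Kubeflow training operator installed. Image, script, and namespace are
# illustrative placeholders, not real artifacts.
from kubernetes import client, config


def submit_tfjob(namespace: str = "kubeflow") -> None:
    """Create a single-worker TFJob in the given namespace."""
    config.load_kube_config()  # authenticate using the local kubeconfig

    tfjob = {
        "apiVersion": "kubeflow.org/v1",
        "kind": "TFJob",
        "metadata": {"name": "mnist-train", "namespace": namespace},
        "spec": {
            "tfReplicaSpecs": {
                "Worker": {
                    "replicas": 1,
                    "restartPolicy": "OnFailure",
                    "template": {
                        "spec": {
                            "containers": [
                                {
                                    # the operator expects the training
                                    # container to be named "tensorflow"
                                    "name": "tensorflow",
                                    "image": "example.registry/mnist-train:latest",
                                    "command": ["python", "/opt/train.py"],
                                }
                            ]
                        }
                    },
                }
            }
        },
    }

    # TFJob is a Kubernetes custom resource, so it is created via the
    # CustomObjectsApi rather than the core workloads API.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubeflow.org",
        version="v1",
        namespace=namespace,
        plural="tfjobs",
        body=tfjob,
    )


if __name__ == "__main__":
    submit_tfjob()
```

Once submitted, the training operator creates the worker pod(s), monitors their status, and records success or failure on the TFJob resource, which can be inspected with standard tooling such as `kubectl get tfjobs`.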