We have hosted the application KServe so that it can be run in our online workstations, either with Wine or natively.
A quick description of KServe:
KServe provides a Kubernetes Custom Resource Definition for serving machine learning (ML) models on arbitrary frameworks. It aims to solve production model serving use cases by providing performant, high-abstraction interfaces for common ML frameworks like TensorFlow, XGBoost, Scikit-learn, PyTorch, and ONNX. It encapsulates the complexity of autoscaling, networking, health checking, and server configuration to bring cutting-edge serving features like GPU Autoscaling, Scale to Zero, and Canary Rollouts to your ML deployments. It enables a simple, pluggable, and complete story for production ML serving, including prediction, pre-processing, post-processing, and explainability. KServe is used across various organizations.
Features:
- KServe is a standard, cloud-agnostic Model Inference Platform on Kubernetes, built for highly scalable use cases
- Provides a performant, standardized inference protocol across ML frameworks
- Supports modern serverless inference workloads with request-based autoscaling, including scale-to-zero on CPU and GPU
- Provides high scalability, density packing, and intelligent routing using ModelMesh
- Simple and pluggable production serving for inference, pre/post-processing, monitoring, and explainability
- Advanced deployments such as canary rollouts, pipelines, and ensembles with InferenceGraph
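Since the features above revolve around the InferenceService custom resource, a minimal deployment sketch may help. The snippet below uses the KServe Python SDK to create an InferenceService for a Scikit-learn model; the service name, namespace, and storage URI are illustrative placeholders rather than resources provided by this page.

```python
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
    constants,
)

# Build the InferenceService custom resource in code instead of YAML.
# Name, namespace, and storage_uri are placeholders for illustration.
isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind=constants.KSERVE_KIND,
    metadata=client.V1ObjectMeta(name="sklearn-iris", namespace="kserve-test"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model"
            )
        )
    ),
)

# Submitting the resource hands autoscaling, networking, and health
# checking off to KServe's controller.
kserve_client = KServeClient()
kserve_client.create(isvc)
```

Once the service reports ready, predictions go through the standardized inference protocol (for example, a POST to the model's predict endpoint), which is what lets the same client code work across the supported frameworks.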
Programming Language: Python.
Categories: