Serve Custom Models

Attention

This feature is a Tech Preview, so it is not fully supported by Arrikto and may not be functionally complete. While it is not intended for production use, we encourage you to try it out and provide us with feedback.

Read our Tech Preview Policy for more information.

Serving custom models means building your own model server when no off-the-shelf model server fits your needs. You package each custom model server in a Docker image and deploy it using KServe.
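To give a rough idea of what such a custom model server looks like, here is a minimal sketch built on the KServe Python SDK. The class name, model name, and prediction logic are placeholders invented for illustration; only the Model/ModelServer scaffolding comes from KServe itself.

```python
from typing import Dict

from kserve import Model, ModelServer


class CustomModel(Model):  # hypothetical example class
    """A minimal custom predictor that KServe can serve over HTTP."""

    def __init__(self, name: str):
        super().__init__(name)
        self.model = None
        self.load()

    def load(self):
        # Load your model weights here, e.g., from a file baked into the
        # Docker image. A trivial stand-in is used for illustration.
        self.model = lambda x: x * 2
        self.ready = True

    def predict(self, payload: Dict, headers: Dict[str, str] = None) -> Dict:
        # "instances" follows the KServe V1 prediction protocol.
        instances = payload["instances"]
        return {"predictions": [self.model(x) for x in instances]}


if __name__ == "__main__":
    # The model name must match the name the InferenceService exposes.
    ModelServer().start([CustomModel("custom-model")])
```

You would then package this script and its dependencies in a Docker image whose entry point runs it; that image is what you hand to KServe, or to Kale, as described below.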

Kale exposes a serve API that allows you to create an InferenceService (see the sketch after this list) by

  • combining a Kubeflow artifact ID for the predictor component with a Docker image for the transformer component, and vice versa.
  • using a Docker image that packages the model and its dependencies for the predictor component, and, if needed, a Docker image that packages the transformer component and its dependencies.
  • passing a full container spec to configure the Docker images for both the predictor and the transformer components.
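To make these options concrete, here is a hypothetical sketch of such a call. The serve entry point, its argument names, and the serve_config keys shown here are assumptions made for illustration, not the verbatim Kale API; consult the Kale API reference for the exact signature.

```python
# Hypothetical sketch -- the entry point, argument names, and config keys
# are assumptions for illustration, not the verbatim Kale serve API.
from kale.serve import serve  # assumed import path

# Pair an existing Kubeflow model artifact (predictor) with a custom
# Docker image for the transformer component.
isvc = serve(
    model_id="<your-model-artifact-id>",  # placeholder Kubeflow artifact ID
    serve_config={
        "transformer": {
            # Image built from your custom transformer code (hypothetical).
            "image": "gcr.io/my-project/my-transformer:v1",
        },
    },
)

# Alternatively, pass a full container spec, packaging the model and its
# dependencies directly in a Docker image for the predictor component.
isvc = serve(
    serve_config={
        "predictor": {
            "container": {  # a Kubernetes container spec as a dict
                "image": "gcr.io/my-project/my-model-server:v1",  # hypothetical
                "ports": [{"containerPort": 8080, "protocol": "TCP"}],
            },
        },
    },
)
```

In every case the goal is the same: Kale assembles the InferenceService for you, so you never write the manifest by hand.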

The following guides walk you through using the Kale serve API to serve custom models without writing your own .yaml files or building Docker images for every component, since you can reuse the Kubeflow artifacts you have already created.