Serve Model from Notebook

In this section, you will serve a trained Machine Learning (ML) model using Kale and KServe.

Warning

This API is deprecated. We recommend serving a model by first submitting it as an MLMD artifact. Read the Serve SKLearn Models guide to learn how.

What You’ll Need

  • An Arrikto EKF or MiniKF deployment with the default Kale Docker image.

Procedure

  1. Create a new notebook server using the default Kale Docker image. The image will have the following naming scheme:

    gcr.io/arrikto/jupyter-kale-py38:<IMAGE_TAG>

    Note

    The <IMAGE_TAG> varies based on the MiniKF or Arrikto EKF release.

  2. Create a new Jupyter notebook (that is, an IPYNB file):

    ../../../_images/ipynb2.png
  3. Copy and paste the following import statements into the first code cell, then run it:

    import json
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from kale.common.serveutils import serve

    This is how your notebook cell will look:

    ../../../_images/imports3.png
  4. Define functions to load, split, and transform your dataset in a different code cell, then run it:

    def load(random_state):
        x, y = make_classification(random_state=random_state)
        return x, y

    def split(x, y):
        x, x_test, y, y_test = train_test_split(x, y, test_size=0.1)
        return x, x_test, y, y_test

    def process(x):
        scaler = StandardScaler()
        x = scaler.fit_transform(x)
        return x, scaler

    This is how your notebook cell will look:

    ../../../_images/processing.png

    Note

    In the process function, we return the processed data as well as the scaler object. You will need the scaler for launching a KServe transformer component.
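
    To see why you should keep the fitted scaler around, consider the following minimal sketch. A StandardScaler fitted on training data stores statistics (mean_ and scale_) that must be reused, unchanged, on any data arriving later; the arrays below are made-up stand-ins:

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    x_train = np.random.rand(100, 20)        # stand-in for the training data
    scaler = StandardScaler().fit(x_train)   # learns mean_ and scale_
    x_new = np.random.rand(5, 20)            # stand-in for data seen at serving time
    x_new_scaled = scaler.transform(x_new)   # reuse the training statistics; no refit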

  5. Add the data processing function that you want to turn into a KServe transformer component in a different code cell and run it:

    def process_raw(inputs):
        import numpy as np
        res = list()
        for instance in inputs["instances"]:
            processed_input = scaler.transform(np.array(instance)[None, :])
            res.append(processed_input.squeeze().tolist())
        return {**inputs, "instances": res}

    This is how your notebook cell will look:

    ../../../_images/transformer.png

    Note

    When KServe feeds your data to the transformer component, it passes one example at a time, as a plain Python list. To make this work, you need to cast each example to a NumPy array and expand its dimensions, because the scaler expects a 2D array. Finally, you need to squeeze the dimensions back and turn the result into a Python list before returning the processed data, as the sketch below illustrates.
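
    Here is a minimal sketch of that reshaping, using a made-up three-feature example:

    import numpy as np

    instance = [0.5, -1.2, 3.4]          # one example, as KServe passes it
    arr = np.array(instance)[None, :]    # shape (1, 3): the 2D input the scaler expects
    row = arr.squeeze().tolist()         # squeeze back to a plain 1D Python list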

  6. Create a function to train your model in the next code cell and run it:

    def train(x, y, training_iterations):
        model = LogisticRegression(max_iter=training_iterations)
        model.fit(x, y)
        return model

    This is how your notebook cell will look:

    ../../../_images/training.png
  7. Call the functions to bring everything together in a different code cell and run it:

    # load the data
    x, y = load(42)
    x_raw, x_test_raw, y, y_test = split(x, y)

    # process the data
    x, scaler = process(x_raw)

    # train the model
    model = train(x, y, 1000)

    This is how your notebook cell will look:

    ../../../_images/run1.png
  8. In a different code cell, call the serve function and pass the trained model, the preprocessing function, and its dependencies as arguments. Then, run the cell:

    kfserver = serve(model, preprocessing_fn=process_raw, preprocessing_assets={'scaler': scaler})

    This is how your notebook cell will look:

    ../../../_images/serve.png

    Inside the serve function, you first pass the model you have trained. This instructs Kale to create a new InferenceService that serves your model in its predictor component. Kale infers the predictor type and creates the corresponding service.

    Moreover, if you pass a preprocessing function (preprocessing_fn), Kale also includes a transformer component in the InferenceService, which transforms your data before passing it to the predictor component. Note that you must explicitly pass any global variable the preprocessing function depends on as an asset (preprocessing_assets). In this case, the preprocessing function depends on the scaler object, so you pass a Python dictionary where each key matches the name of a variable the preprocessing function depends on and the value is the actual object.
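
    As a quick sanity check before relying on the deployed transformer, you can call process_raw directly with a payload in the request format KServe uses; this sketch reuses the test data from step 7:

    # process_raw resolves the scaler global, which serve() ships to the
    # transformer component as a preprocessing asset.
    sample_request = {"instances": x_test_raw[:2].tolist()}
    print(process_raw(sample_request))  # {'instances': [<two scaled rows>]}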

  9. Invoke the server to get predictions in a different code cell and run it:

    data = json.dumps({"instances": x_test_raw.tolist()})
    predictions = kfserver.predict(data)

    This is how your notebook cell will look:

    ../../../_images/predict.png
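
    The payload follows the KServe V1 inference protocol: requests carry an "instances" key and responses a "predictions" key. The exact return type of predict may vary across Kale versions, so this inspection sketch handles both a parsed object and a raw JSON string:

    import json

    # Depending on the Kale version, predict() may return a parsed object
    # or a raw JSON string; handle both.
    result = predictions if isinstance(predictions, dict) else json.loads(predictions)
    print(result["predictions"][:5])  # first few predicted labels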

Summary

You have successfully trained a model, served it with Kale by deploying a KServe InferenceService which consists of a transformer and a predictor component, and invoked the service with a test dataset.

What’s Next

Check out the rest of the Kale user guides for serving.