Create parameterized pipelines

This guide will walk you through parameterizing a Kubeflow Pipeline using the Kale SDK.

What You’ll Need

  • An EKF or MiniKF deployment with the default Kale Docker image.
  • An understanding of how the Kale SDK works.


  1. Create a new Notebook server using the default Kale Docker image. The image will have the following naming scheme:

    <IMAGE_TAG>


    The <IMAGE_TAG> varies based on the MiniKF or EKF release.

  2. Connect to the server, open a terminal, and install scikit-learn:

    $ pip3 install --user scikit-learn==0.23.0
  3. Create a new Python file and name it

    $ touch
  4. Copy and paste the following code inside

    # Copyright © 2021 Arrikto Inc.  All Rights Reserved.

    """Kale SDK.

    This script trains an ML pipeline to solve a binary classification task.
    """

    from kale.sdk import pipeline, step
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split


    @step(name="data_loading")
    def load(random_state):
        """Create a random dataset for binary classification."""
        rs = int(random_state)
        x, y = make_classification(random_state=rs)
        return x, y


    @step(name="data_split")
    def split(x, y):
        """Split the data into train and test sets."""
        x, x_test, y, y_test = train_test_split(x, y, test_size=0.1)
        return x, x_test, y, y_test


    @step(name="model_training")
    def train(x, x_test, y, training_iterations):
        """Train a Logistic Regression model."""
        iters = int(training_iterations)
        model = LogisticRegression(max_iter=iters)
        model.fit(x, y)


    @pipeline(name="binary-classification", experiment="kale-tutorial")
    def ml_pipeline(rs=42, iters=100):
        """Run the ML pipeline."""
        x, y = load(rs)
        x, x_test, y, y_test = split(x, y)
        train(x, x_test, y, iters)


    if __name__ == "__main__":
        ml_pipeline(rs=42, iters=100)

    Alternatively, download the Python file.

    In this code sample, you start with a standard Python script that trains a Logistic Regression model, and you decorate its functions using the Kale SDK. To read more about how to create this file, head to the corresponding Kale SDK user guide.
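As a rough mental model only (this is not Kale's actual implementation), a step decorator like the one imported from kale.sdk can be pictured as a wrapper that tags a plain function with a step name and runs it as part of a larger workflow. The toy `step` decorator below is hypothetical and exists purely to illustrate the pattern:

```python
# Illustrative sketch only: a toy decorator mimicking how a pipeline DSL
# such as the Kale SDK might wrap plain functions into named steps.
# The real @step decorator comes from kale.sdk and does much more
# (marshalling inputs/outputs between pods, building the KFP graph).
import functools


def step(name):
    """Tag a function as a named pipeline step (toy version)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print(f"Running step: {name}")
            return func(*args, **kwargs)
        wrapper._step_name = name  # metadata a DSL could use to build the DAG
        return wrapper
    return decorator


@step(name="double")
def double(x):
    """A trivial 'step' that doubles its input."""
    return x * 2


result = double(21)  # prints "Running step: double" and returns 42
```

The key point is that the decorated function still behaves like a normal Python function, which is why you can also run the whole pipeline locally as a plain script.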

    The pipeline resulting from the compilation of this Python script will have two parameters:

    • rs: to pass a random seed to the dataset generator, with a default value of 42
    • iters: to define the number of iterations for the model, with a default value of 100


    You should always provide default values for the parameters. These defaults will end up in the definition of the uploaded pipeline. You can override them by calling the pipeline function with new argument values, or set different values when creating a Run from the KFP UI. Head to the KFP macros guide to learn how to provide dynamic values as input to your pipelines.
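Note that the steps cast their inputs with int() before using them, since parameter values set when creating a Run from the KFP UI may arrive as strings. A minimal sketch of that defensive coercion, using a hypothetical helper name:

```python
# Hypothetical helper illustrating the int() casts used in the steps above:
# the same function works whether the parameter arrives as an int (when
# called locally with defaults) or as a string (as the KFP UI may pass it).
def coerce_iterations(training_iterations=100):
    """Return the iteration count as an int, with a default of 100."""
    return int(training_iterations)


print(coerce_iterations())       # default value -> 100
print(coerce_iterations("250"))  # string value, e.g. from the KFP UI -> 250
```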

  5. Run the script locally to test whether your code runs successfully using Kale’s marshalling mechanism:

    $ python3 -m kale
  6. (Optional) Produce a workflow YAML file that you can inspect:

    $ python3 -m kale --compile

    After the successful execution of this command, look for the workflow YAML file inside a .kale directory in your working directory. You can upload and submit this file to Kubeflow manually through the KFP User Interface (UI).

  7. Deploy and run your code as a KFP pipeline:

    $ python3 -m kale --kfp


    To see the complete list of arguments and their respective usage, run python3 -m kale --help.


You have successfully created a parameterized KFP Pipeline.

What’s Next

The next step is to create and log pipeline metrics.