In Machine Learning (ML), a hyperparameter (HP) is a user-defined value that remains fixed during training and directly influences the performance of the ML model. You can tune HPs either manually or automatically; however, in most cases manual tuning is impractical.
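To make the distinction concrete, here is a minimal, purely illustrative sketch: the learning rate is a hyperparameter, chosen before training and held fixed, while the model parameter `w` is what the training loop itself updates.

```python
def train(learning_rate, steps=100):
    """Minimize f(w) = (w - 3)^2 with plain gradient descent.

    `learning_rate` is a hyperparameter: set by the user up front
    and constant for the whole run. `w` is the model parameter,
    updated at every step.
    """
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)       # df/dw
        w -= learning_rate * grad
    return w

print(train(learning_rate=0.1))  # converges near 3.0
```

Picking a different learning rate changes how well (or whether) training converges, which is exactly why HP tuning matters.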
Kale orchestrates Katib and Kubeflow Pipelines (KFP) experiments to automate HP tuning. Katib natively runs simple Jobs (that is, Pods) as Trials, but Kale implements a shim so that each Trial runs a pipeline in KFP. When a pipeline run completes, Kale collects the metrics from that execution. This lets you optimize models with HP tuning in the familiar KFP environment.
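Conceptually, the workflow above amounts to launching one trial per candidate configuration, collecting a metric from each completed run, and keeping the best. The sketch below illustrates that idea with an exhaustive grid search over a toy search space; it is not Kale or Katib code (in the real setup, Katib schedules the Trials on Kubernetes and, with Kale, each trial is a KFP pipeline run), and `run_trial` is a hypothetical stand-in for a full training run.

```python
import itertools

def run_trial(learning_rate, batch_size):
    """Stand-in for one trial. A real trial would train a model
    and report a validation metric to be maximized."""
    return -(learning_rate - 0.01) ** 2 - (batch_size - 64) ** 2 / 1e6

# Toy search space: one trial per combination.
search_space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
}

# Enumerate every configuration, score each trial, keep the best.
best = max(
    (dict(zip(search_space, values))
     for values in itertools.product(*search_space.values())),
    key=lambda cfg: run_trial(**cfg),
)
print(best)  # {'learning_rate': 0.01, 'batch_size': 64}
```

Katib supports smarter search algorithms than this brute-force grid (for example random search and Bayesian optimization), but the trial-launch-and-collect-metrics loop is the same.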
What You’ll Need
- An EKF or MiniKF deployment.
Choose one of the following options, based on how you are using Kale.