Phase 3: Evaluation of ML pipelines
In the next step, appliedAI deployed the tools, partly on Google Kubernetes Engine and partly on appliedAI's in-house Kubernetes cluster. This process was supported by a cross-functional project team of ML and DevOps experts.
Subsequently, appliedAI evaluated the five ML tools and integrated pipelines from the shortlist. For this purpose, the team compiled a set of questions to be answered during the evaluation of each pipeline. The criteria were considered at both the tool and the pipeline level to obtain a holistic view of the pipelines under evaluation.
Additionally, the teams from Wacker and Infineon had access to the tools on appliedAI's infrastructure, so they could try out the tools themselves and evaluate them independently. Throughout this process, appliedAI supported the teams from Wacker and Infineon.
This catalog of questions supplemented the catalog of evaluation criteria. The pipelines' diverse capabilities were then tested using five compiled ML workflows: training, storage, monitoring, scalability, and AutoML.
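An evaluation like this can be aggregated into a single pipeline-level score per tool. The following sketch is purely illustrative: the workflow weights, the 0-5 scoring scale, and the example scores are hypothetical placeholders, not results from the project.

```python
# Hypothetical sketch: aggregate per-workflow scores into one pipeline score.
# All weights and scores below are illustrative, not actual project results.

WORKFLOWS = ["training", "storage", "monitoring", "scalability", "automl"]

# Placeholder weights reflecting the assumed relative importance of each
# workflow; in practice these would come from the catalog of criteria.
WEIGHTS = {
    "training": 0.3,
    "storage": 0.15,
    "monitoring": 0.2,
    "scalability": 0.2,
    "automl": 0.15,
}

def pipeline_score(scores: dict) -> float:
    """Weighted average of per-workflow scores (each on a 0-5 scale)."""
    return sum(WEIGHTS[w] * scores[w] for w in WORKFLOWS)

# Made-up scores for a single shortlisted tool:
example_scores = {
    "training": 4,
    "storage": 3,
    "monitoring": 5,
    "scalability": 4,
    "automl": 2,
}
print(round(pipeline_score(example_scores), 2))  # → 3.75
```

Scoring each workflow separately, as sketched here, keeps tool-level strengths and weaknesses visible even after aggregation.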
In the final step, we discussed the findings and results of the evaluation in the joint project team with Wacker and Infineon. Throughout the project, this team, consisting of experts from appliedAI, Wacker, and Infineon, met regularly for project meetings. In these meetings, concrete feedback was given on individual work steps in keeping with an agile process, and changes were then incorporated consistently, in a goal-oriented and flexible manner, in line with the results of the discussions and the requirements.