Install Alauda AI

Alauda AI now offers flexible deployment options. Starting with Alauda AI 1.4, the Serverless capability is an optional feature, allowing for a more streamlined installation if it's not needed.

To begin, you will need to deploy the Alauda AI Operator. This is the core engine for all Alauda AI products. By default, it uses the KServe Raw Deployment mode for the inference backend, which is particularly recommended for resource-intensive generative workloads. This mode provides a straightforward way to deploy models and offers robust, customizable deployment capabilities by leveraging foundational Kubernetes functionalities.

If your use case requires Serverless functionality, which enables advanced features such as scale-to-zero for cost optimization, you can optionally install the Knative CE Operator. It is not part of the default installation and can be added at any time.

INFO

Recommended deployment option: For generative inference workloads, the Raw Kubernetes Deployment approach is recommended as it provides the most control over resource allocation and scaling.

Downloading

Operator Components:

  • Alauda AI Operator

    Alauda AI Operator is the main engine that powers Alauda AI products. It focuses on two core functions: model management and inference services, and provides a flexible framework that can be easily expanded.

    Download package: aml-operator.xxx.tgz

  • Knative CE Operator

    Knative CE Operator provides serverless model inference.

    Download package: knative-operator.ALL.v1.x.x-yymmdd.tgz

INFO

You can download the packages named 'Alauda AI' and 'Knative CE Operator' from the Marketplace on the Customer Portal website.

Uploading

We need to upload both the Alauda AI Operator and the Knative CE Operator packages to the cluster where Alauda AI will be used.

Downloading the violet tool

First, download the violet tool if it is not already present on the machine.

Log into the Web Console and switch to the Administrator view:

  1. Click Marketplace / Upload Packages.
  2. Click Download Packaging and Listing Tool.
  3. Locate the right OS / CPU architecture under Execution Environment.
  4. Click Download to download the violet tool.
  5. Run chmod +x ${PATH_TO_THE_VIOLET_TOOL} to make the tool executable.
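The chmod step above can be wrapped in a small helper that also confirms the result (a sketch; the tool path is whatever location you downloaded to):

```shell
# Sketch: make the downloaded violet tool executable and verify it.
setup_violet() {
  local tool="$1"      # path to the downloaded violet binary
  chmod +x "${tool}"   # grant execute permission
  [ -x "${tool}" ]     # confirm the executable bit is set
}
```

For example, `setup_violet ./violet` after downloading into the current directory.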

Uploading package

Save the following script as uploading-ai-cluster-packages.sh, then read the comments below to set the environment variables used for configuration in that script.

uploading-ai-cluster-packages.sh
#!/usr/bin/env bash
export PLATFORM_ADDRESS=https://platform-address
export PLATFORM_ADMIN_USER=<admin>
export PLATFORM_ADMIN_PASSWORD=<admin-password>
export CLUSTER=<cluster-name>

export AI_CLUSTER_OPERATOR_NAME=<path-to-aml-operator-tarball>
export KNATIVE_CE_OPERATOR_PKG_NAME=<path-to-knative-operator-tarball>

VIOLET_EXTRA_ARGS=()
IS_EXTERNAL_REGISTRY=

# If the image registry of the destination cluster is not the platform built-in
# registry (i.e. an external private or public registry), additional
# configuration is required (uncomment the following line):
# IS_EXTERNAL_REGISTRY=true
if [[ "${IS_EXTERNAL_REGISTRY}" == "true" ]]; then
    REGISTRY_ADDRESS=<external-registry-url>
    REGISTRY_USERNAME=<registry-username>
    REGISTRY_PASSWORD=<registry-password>

    VIOLET_EXTRA_ARGS+=(
        --dst-repo "${REGISTRY_ADDRESS}"
        --username "${REGISTRY_USERNAME}"
        --password "${REGISTRY_PASSWORD}"
    )
fi

# Push **Alauda AI Cluster** operator package to destination cluster
violet push \
    "${AI_CLUSTER_OPERATOR_NAME}" \
    --platform-address="${PLATFORM_ADDRESS}" \
    --platform-username="${PLATFORM_ADMIN_USER}" \
    --platform-password="${PLATFORM_ADMIN_PASSWORD}" \
    --clusters="${CLUSTER}" \
    "${VIOLET_EXTRA_ARGS[@]}"

# Push **Knative CE Operator** package to destination cluster
violet push \
    "${KNATIVE_CE_OPERATOR_PKG_NAME}" \
    --platform-address="${PLATFORM_ADDRESS}" \
    --platform-username="${PLATFORM_ADMIN_USER}" \
    --platform-password="${PLATFORM_ADMIN_PASSWORD}" \
    --clusters="${CLUSTER}" \
    "${VIOLET_EXTRA_ARGS[@]}"
  1. ${PLATFORM_ADDRESS} is your ACP platform address.
  2. ${PLATFORM_ADMIN_USER} is the username of the ACP platform admin.
  3. ${PLATFORM_ADMIN_PASSWORD} is the password of the ACP platform admin.
  4. ${CLUSTER} is the name of the cluster to install the Alauda AI components into.
  5. ${AI_CLUSTER_OPERATOR_NAME} is the path to the Alauda AI Cluster Operator package tarball.
  6. ${KNATIVE_CE_OPERATOR_PKG_NAME} is the path to the Knative CE Operator package tarball.
  7. ${REGISTRY_ADDRESS} is the address of the external registry.
  8. ${REGISTRY_USERNAME} is the username of the external registry.
  9. ${REGISTRY_PASSWORD} is the password of the external registry.

After configuration, run bash ./uploading-ai-cluster-packages.sh to upload both the Alauda AI and Knative CE Operator packages.
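Before running the script, a quick pre-flight check can catch placeholders that were never replaced (a sketch; the variable names match the script above):

```shell
# Sketch: fail fast if any required variable is unset or still a <placeholder>.
check_vars() {
  local missing=0 v
  for v in PLATFORM_ADDRESS PLATFORM_ADMIN_USER PLATFORM_ADMIN_PASSWORD CLUSTER \
           AI_CLUSTER_OPERATOR_NAME KNATIVE_CE_OPERATOR_PKG_NAME; do
    # ${!v} is bash indirect expansion: the value of the variable named by $v.
    if [ -z "${!v:-}" ] || [[ "${!v}" == \<*\> ]]; then
      echo "ERROR: ${v} is not configured" >&2
      missing=1
    fi
  done
  return "${missing}"
}
```

Call check_vars at the top of the script (after the exports) so it aborts before any violet push runs with placeholder values.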

Installing Alauda AI Operator

Procedure

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install Alauda AI.

  3. Select Alauda AI, then click Install.

    The Install Alauda AI window will pop up.

  4. In the Install Alauda AI window, configure the installation as follows.

  5. Leave Channel unchanged.

  6. Check whether the Version matches the Alauda AI version you want to install.

  7. Leave Installation Location unchanged; it should be aml-operator by default.

  8. Select Manual for Upgrade Strategy.

  9. Click Install.

Verification

Confirm that the Alauda AI tile shows one of the following states:

  • Installing: installation is in progress; wait for this to change to Installed.
  • Installed: installation is complete.

Creating Alauda AI Instance

Once Alauda AI Operator is installed, you can create an Alauda AI instance.

Procedure

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the destination cluster where the Alauda AI Operator is installed.

  3. Select Alauda AI, then click it to open the operator details page.

  4. On the Alauda AI page, click the All Instances tab.

  5. Click Create.

    The Select Instance Type window will pop up.

  6. Locate the AmlCluster tile in the Select Instance Type window, then click Create.

    The Create AmlCluster form will appear.

  7. Keep the default value for Name unchanged.

  8. Select a Deploy Flavor from the dropdown:

    1. single-node for non-HA deployments.
    2. ha-cluster for HA cluster deployments (recommended for production).
  9. Set KServe Mode to Managed.

  10. Enter a valid domain in the Domain field.

    INFO

    This domain is used by ingress gateway for exposing model serving services. Most likely, you will want to use a wildcard name, like *.example.com.

    You can specify the following certificate types by updating the Domain Certificate Type field:

    • Provided
    • SelfSigned
    • ACPDefaultIngress

    By default, the configuration uses the SelfSigned certificate type to secure ingress traffic to your cluster; the certificate is stored in the knative-serving-cert secret specified in the Domain Certificate Secret field.

  11. In the Serverless Configuration section, set Knative Serving Provider to Operator; leave all other parameters blank.

  12. Under the GitLab section:

    1. Type the URL of your self-hosted GitLab instance for Base URL.
    2. Type cpaas-system for Admin Token Secret Namespace.
    3. Type aml-gitlab-admin-token for Admin Token Secret Name.
  13. Review the configurations above, then click Create.

Verification

Check the status field of the AmlCluster resource named default:

kubectl get amlcluster default

The command should return Ready:

NAME      READY   REASON
default   True    Succeeded
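Instead of polling the output manually, kubectl wait can block until the condition is met (a sketch; assumes kubectl access to the destination cluster):

```shell
# Sketch: block until the AmlCluster named "default" reports the Ready
# condition, matching the READY column shown above.
wait_for_amlcluster() {
  kubectl wait --for=condition=Ready amlcluster/default --timeout=600s
}
```

The command returns non-zero if the timeout elapses, so it can gate later steps in an automation script.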

Now, the core capabilities of Alauda AI have been successfully deployed. If you want to quickly experience the product, please refer to the Quick Start.

Enabling Serverless Functionality

Serverless functionality is an optional capability that requires an additional operator and instance to be deployed.

1. Installing the Knative CE Operator

INFO

With the Knative CE Operator, the Knative networking layer uses Kourier, so installing Istio is no longer required.

Procedure

In Administrator view:

  1. Click Marketplace / OperatorHub.

  2. At the top of the console, from the Cluster dropdown list, select the destination cluster where you want to install the operator.

  3. Search for and select Knative CE Operator, then click Install.

    The Install Knative CE Operator window will pop up.

  4. In the Install Knative CE Operator window, configure the installation as follows.

  5. Leave Channel unchanged.

  6. Check whether the Version matches the Knative CE Operator version you want to install.

  7. Leave Installation Location unchanged.

  8. Select Manual for Upgrade Strategy.

  9. Click Install.

Verification

Confirm that the Knative CE Operator tile shows one of the following states:

  • Installing: installation is in progress; wait for this to change to Installed.
  • Installed: installation is complete.

2. Creating Knative Serving Instance

Once Knative CE Operator is installed, you need to create the KnativeServing instance manually.

Procedure

  1. Create the knative-serving namespace.

    kubectl create ns knative-serving
  2. In the Administrator view, navigate to Operators -> Installed Operators.

  3. Select the Knative CE Operator.

  4. Under Provided APIs, locate KnativeServing and click Create Instance.

  5. Switch to YAML view.

  6. Replace the content with the following YAML:

    apiVersion: operator.knative.dev/v1beta1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec:
      config:
        deployment:
          registries-skipping-tag-resolving: kind.local,ko.local,dev.local,private-registry
        domain:
          example.com: ""
        features:
          kubernetes.podspec-affinity: enabled
          kubernetes.podspec-hostipc: enabled
          kubernetes.podspec-hostnetwork: enabled
          kubernetes.podspec-init-containers: enabled
          kubernetes.podspec-nodeselector: enabled
          kubernetes.podspec-persistent-volume-claim: enabled
          kubernetes.podspec-persistent-volume-write: enabled
          kubernetes.podspec-securitycontext: enabled
          kubernetes.podspec-tolerations: enabled
          kubernetes.podspec-volumes-emptydir: enabled
          queueproxy.resource-defaults: enabled
        network:
          domain-template: '{{.Name}}.{{.Namespace}}.{{.Domain}}'
          ingress-class: kourier.ingress.networking.knative.dev
      ingress:
        kourier:
          enabled: true

  7. Click Create.

  1. private-registry is a placeholder for your private registry address. You can find this in the Administrator view, then click Clusters, select your cluster, and check the Private Registry value in the Basic Info section.
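To verify the instance from the command line, you can read its Ready condition (a sketch; assumes kubectl access to the cluster):

```shell
# Sketch: print the Ready condition status of the KnativeServing instance;
# expect "True" once reconciliation has finished.
knativeserving_ready() {
  kubectl -n knative-serving get knativeserving knative-serving \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
}
```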

3. Integrate with AmlCluster

Configure the AmlCluster instance to integrate with the KnativeServing instance.

In the AmlCluster instance update window, you will need to fill in the required parameters in the Serverless Configuration section.

INFO

After the initial installation, you will find that only the Knative Serving Provider is set to Operator. You will now need to provide values for the following parameters:

  • APIVersion: operator.knative.dev/v1beta1
  • Kind: KnativeServing
  • Name: knative-serving
  • Namespace: knative-serving

Replace GitLab Service After Installation

If you want to replace GitLab Service after installation, follow these steps:

  1. Reconfigure GitLab Service
    Refer to the Pre-installation Configuration and re-execute its steps.

  2. Update Alauda AI Instance

    • In Administrator view, navigate to Marketplace > OperatorHub
    • From the Cluster dropdown, select the target cluster
    • Choose Alauda AI and click the All Instances tab
    • Locate the 'default' instance and click Update
  3. Modify GitLab Configuration
    In the Update default form:

    • Locate the GitLab section
    • Enter:
      • Base URL: The URL of your new GitLab instance
      • Admin Token Secret Namespace: cpaas-system
      • Admin Token Secret Name: aml-gitlab-admin-token
  4. Restart Components
    Restart the aml-controller deployment in the kubeflow namespace.
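The restart step above can be done with kubectl rollout (a sketch; deployment and namespace names are taken from the step above, and kubectl access to the cluster is assumed):

```shell
# Sketch: restart the aml-controller deployment and wait for the rollout
# to complete before continuing.
restart_aml_controller() {
  kubectl -n kubeflow rollout restart deployment/aml-controller &&
  kubectl -n kubeflow rollout status deployment/aml-controller --timeout=300s
}
```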

  5. Refresh Platform Data
    In Alauda AI management view, re-manage all namespaces.

    • In the Alauda AI view, switch from Business View to Admin view
    • On the Namespace Management page, delete all existing managed namespaces
    • Use "Managed Namespace" to add namespaces requiring Alauda AI integration
    INFO

    Original models won't be migrated automatically. To continue using these models, either:

    • Recreate and re-upload them in the new GitLab instance, or
    • Manually transfer the model files to the new repository
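The manual transfer can be sketched with a git mirror (the repository URLs below are hypothetical placeholders; substitute your actual old and new GitLab addresses):

```shell
# Sketch: mirror a model repository from the old GitLab to the new one.
# Both repository URLs are hypothetical placeholders.
mirror_model_repo() {
  local old_repo="$1" new_repo="$2" tmp
  tmp="$(mktemp -d)"                                  # scratch directory
  git clone --mirror "${old_repo}" "${tmp}/repo.git"  # copies all refs
  git -C "${tmp}/repo.git" push --mirror "${new_repo}"
  rm -rf "${tmp}"
}
```

For example: `mirror_model_repo https://old-gitlab.example.com/ai/my-model.git https://new-gitlab.example.com/ai/my-model.git`. A mirror push preserves all branches and tags, so model versions tracked as git refs carry over.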