Container Service Extension 4.0 has been launched with a number of important enhancements and additional use cases, including Cluster API, lifecycle management through a user interface, GPU support for Kubernetes clusters, and integration with VMware Cloud Director as infrastructure. With its feature-rich user interface, customers can perform operations such as creation, scaling, and upgrading on Tanzu Kubernetes clusters. However, some customers may seek automation support for these same operations.
This blog post is intended for customers who want to automate the provisioning of Tanzu Kubernetes clusters on the VMware Cloud Director tenant portal using the VMware Cloud Director API. Although the VCD API is supported, this blog post is necessary because the Cluster API is used to create and manage TKG clusters on VCD, and it takes some work to produce the Cluster API-generated payload required to perform operations on TKG clusters. The blog post outlines the step-by-step process for generating the correct payload for customers using their own VCD infrastructure.
Version Support:
This API guide is applicable to Tanzu Kubernetes clusters created by CSE 4.0 and CSE 4.0.1.
The existing prerequisites for customers to create TKG clusters in their organizations also apply to the automation flow. These prerequisites are summarized here and can be found in the official documentation for onboarding Provider and Tenant Admin users. The following sections provide an overview of the requirements for both cloud provider administrators and tenant admin users.
Cloud Provider Admin Steps
The steps to onboard customers are demonstrated in this video and documented here. Once a customer organization and its users are onboarded, they can use the next section to call the APIs, or consume them to build automated cluster operations.
As a quick summary, the following steps are expected to be performed by the cloud provider to onboard and prepare the customer:
- Review the Interoperability Matrix to support Container Service Extension 4.0 and 4.0.1
- Allow the necessary communication for the CSE server
- Start the CSE server and onboard the customer organization (Reference Demo and Official Documentation)
Customer Org Admin Steps
Once the cloud provider has onboarded the customer onto the Container Service Extension, the organization administrator must create and assign users with the capability to create and manage TKG clusters for the customer organization. This documentation outlines the procedure for creating a user with the "Kubernetes Cluster Author" role within the tenant organization.
It is then assumed that the user "acmekco" has been granted the required resources and access within the customer organization to execute Kubernetes cluster operations.
Generate ‘capiyaml’ payload
- Collect VCD infrastructure and Kubernetes cluster details
This operation requires the following information from the VCD tenant portal. The right column shows example values used as reference throughout this blog post.
| Input | Example value for this blog |
| --- | --- |
| VCD_SITE | VCD address (https://vcd-01a.local) |
| VCD_ORGANIZATION | Customer organization name (ACME) |
| VCD_ORGANIZATION_VDC | Customer OVDC name (ACME_VDC_T) |
| VCD_ORGANIZATION_VDC_NETWORK | Network name in the customer org (172.16.2.0) |
| VCD_CATALOG | CSE shared catalog name (cse) |
| Input | Example value for this blog |
| --- | --- |
| VCD_TEMPLATE_NAME | Kubernetes and TKG version of the cluster (Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1) |
| VCD_CONTROL_PLANE_SIZING_POLICY | Sizing policy of control plane VMs (TKG small) |
| VCD_CONTROL_PLANE_STORAGE_PROFILE | Storage profile for the control plane of the cluster (Capacity) |
| VCD_CONTROL_PLANE_PLACEMENT_POLICY | Optional – leave empty if not using |
| VCD_WORKER_SIZING_POLICY | Sizing policy of worker node VMs (TKG small) |
| VCD_WORKER_PLACEMENT_POLICY | Optional – leave empty if not using |
| VCD_WORKER_STORAGE_PROFILE | Storage profile for the worker nodes of the cluster (Capacity) |
| CONTROL_PLANE_MACHINE_COUNT | 1 |
| WORKER_MACHINE_COUNT | 1 |
| VCD_REFRESH_TOKEN_B64 | "MHB1d0tXSllVb2twU2tGRjExNllCNGZnVWZqTm5UZ2U=" See the VMware documentation to generate a token before transforming it to Base64 |
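The Base64 value in the last row can be produced locally; a minimal sketch, assuming the API refresh token has already been generated in the VCD portal (the token string below is a placeholder):

```sh
# Encode the VCD API refresh token as Base64 (placeholder token shown).
echo -n '<refresh-token>' | base64
```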
- Install the required tools to generate the capiyaml. You can use any operating system or a virtual machine (including Linux, Mac, or Windows) to generate the payload.
- Once the tenant user has collected all the information, the user must install the following components on the end user's machine: Clusterctl 1.1.3, Kind (0.17.0), and Docker (20.10.21). The following step requires the information collected above, and not access to the VCD infrastructure, to generate the capiyaml payload. A quick sanity check is shown after this item.
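A minimal sanity check, assuming the three tools are installed and on the PATH:

```sh
# Verify the tool versions match the supported combination.
clusterctl version   # expect v1.1.3
kind version         # expect 0.17.0
docker version       # expect 20.10.21
```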
- Copy the TKG CRS files locally. In case the TKG version is missing from the folder, make sure you have the templates created for the desired TKG versions. The following table provides the supported list of etcd, coredns, tkg, and tkr versions for the CSE 4.0 and CSE 4.0.1 releases. Alternatively, use this script to fetch the same values from Tanzu Kubernetes Grid resources.
| Kubernetes Version | Etcd ImageTag | CoreDNS ImageTag | Complete Unique Version | OVA | TKG Product Version | TKr Version |
| --- | --- | --- | --- | --- | --- | --- |
| v1.22.9+vmware.1 | v3.5.4_vmware.2 | v1.8.4_vmware.9 | v1.22.9+vmware.1-tkg.1 | ubuntu-2004-kube-v1.22.9+vmware.1-tkg.1-2182cbabee08edf480ee9bc5866d6933.ova | 1.5.4 | v1.22.9---vmware.1-tkg.1 |
| v1.21.11+vmware.1 | v3.4.13_vmware.27 | v1.8.0_vmware.13 | v1.21.11+vmware.1-tkg.2 | ubuntu-2004-kube-v1.21.11+vmware.1-tkg.2-d788dbbb335710c0a0d1a28670057896.ova | 1.5.4 | v1.21.11---vmware.1-tkg.3 |
| v1.20.15+vmware.1 | v3.4.13_vmware.23 | v1.7.0_vmware.15 | v1.20.15+vmware.1-tkg.2 | ubuntu-2004-kube-v1.20.15+vmware.1-tkg.2-839faf7d1fa7fa356be22b72170ce1a8.ova | 1.5.4 | v1.20.15---vmware.1-tkg.2 |
```sh
mkdir ~/infrastructure-vcd/
cd ~/infrastructure-vcd
mkdir v1.0.0
cd v1.0.0
```
```
crs % ls -lrta
total 0
drwxr-xr-x   6 bhatts  staff  192 Jan 30 16:42 .
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:42 tanzu
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:51 cni
drwxr-xr-x   4 bhatts  staff  128 Jan 30 16:54 cpi
drwxr-xr-x   6 bhatts  staff  192 Jan 30 16:55 csi
drwxr-xr-x  13 bhatts  staff  416 Jan 30 18:53 ..
```
```
v1.0.0% ls -lrta
total 280
drwxr-xr-x   3 bhatts  staff     96 Jan 30 16:41 ..
drwxr-xr-x   6 bhatts  staff    192 Jan 30 16:42 crs
-rw-r--r--   1 bhatts  staff   9073 Jan 30 16:56 cluster-template-v1.20.8-crs.yaml
-rw-r--r--   1 bhatts  staff   9099 Jan 30 16:56 cluster-template-v1.20.8.yaml
-rw-r--r--   1 bhatts  staff   9085 Jan 30 16:57 cluster-template-v1.21.8-crs.yaml
-rw-r--r--   1 bhatts  staff   9023 Jan 30 16:57 cluster-template-v1.21.8.yaml
-rw-r--r--   1 bhatts  staff   9081 Jan 30 16:57 cluster-template-v1.22.9-crs.yaml
-rw-r--r--   1 bhatts  staff   9019 Jan 30 16:57 cluster-template-v1.22.9.yaml
-rw-r--r--   1 bhatts  staff   9469 Jan 30 16:57 cluster-template.yaml
-rw-r--r--   1 bhatts  staff  45546 Jan 30 16:58 infrastructure-components.yaml
-rw-r--r--   1 bhatts  staff    165 Jan 30 16:58 metadata.yaml
-rw-r--r--   1 bhatts  staff   3355 Jan 30 18:53 clusterctl.yaml
drwxr-xr-x  13 bhatts  staff    416 Jan 30 18:53 .
```
- Copy ~/infrastructure-vcd/v1.0.0/clusterctl.yaml to ~/.cluster-api/clusterctl.yaml.
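A minimal sketch of this copy step, assuming the directory layout created above:

```sh
# Place clusterctl.yaml where the clusterctl CLI looks for it.
mkdir -p ~/.cluster-api
cp ~/infrastructure-vcd/v1.0.0/clusterctl.yaml ~/.cluster-api/clusterctl.yaml
```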
- The clusterctl command uses clusterctl.yaml from ~/.cluster-api/clusterctl.yaml to create the capiyaml payload. Update the infrastructure details from the first step of this document.
- Update providers.url in ~/.cluster-api/clusterctl.yaml to ~/infrastructure-vcd/v1.0.0/infrastructure-components.yaml.
```yaml
providers:
  - name: "vcd"
    url: "~/infrastructure-vcd/v1.0.0/infrastructure-components.yaml"
    type: "InfrastructureProvider"
```
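For the infrastructure details, a hedged sketch of the corresponding variables in ~/.cluster-api/clusterctl.yaml, filled in with the example values from the tables above (the exact key names must match the ones already present in the file shipped with the provider):

```yaml
# Example variable values taken from the two input tables in this post.
VCD_SITE: "https://vcd-01a.local"
VCD_ORGANIZATION: "ACME"
VCD_ORGANIZATION_VDC: "ACME_VDC_T"
VCD_ORGANIZATION_VDC_NETWORK: "172.16.2.0"
VCD_CATALOG: "cse"
VCD_TEMPLATE_NAME: "Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1"
VCD_CONTROL_PLANE_SIZING_POLICY: "TKG small"
VCD_CONTROL_PLANE_STORAGE_PROFILE: "Capacity"
VCD_WORKER_SIZING_POLICY: "TKG small"
VCD_WORKER_STORAGE_PROFILE: "Capacity"
CONTROL_PLANE_MACHINE_COUNT: "1"
WORKER_MACHINE_COUNT: "1"
VCD_REFRESH_TOKEN_B64: "MHB1d0tXSllVb2twU2tGRjExNllCNGZnVWZqTm5UZ2U="
```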
At this point, we will need a Kind cluster on which to initialize clusterctl and generate the payload. In this step, create a Kind cluster and initialize clusterctl as follows:
Create a local cluster on Mac // This can be equally executed on the operating system of your choice.

```sh
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
EOF

kind create cluster --config kind-cluster-with-extramounts.yaml
kubectl cluster-info --context kind-kind
kubectl config set-context kind-kind
kubectl get po -A -owide
clusterctl init --core cluster-api:v1.1.3 -b kubeadm:v1.1.3 -c kubeadm:v1.1.3
```
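With clusterctl initialized, a hedged sketch of generating the capiyaml payload; the cluster name, namespace, and flag values below are illustrative and should come from the values collected earlier, and the provider reference (vcd:v1.0.0) assumes the providers entry configured above:

```sh
# Generate the CAPI manifest (capiyaml) for a cluster named "api5".
clusterctl generate cluster api5 \
  --infrastructure vcd:v1.0.0 \
  --kubernetes-version v1.22.9+vmware.1 \
  --control-plane-machine-count 1 \
  --worker-machine-count 1 \
  --target-namespace api5-ns > capiyaml
```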
Update the below TKG labels and annotations on the "kind: Cluster" object.
Old Metadata:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    ccm: external
    cni: antrea
    csi: external
  name: api5
  namespace: default
```
New Metadata:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-role.tkg.tanzu.vmware.com/management: ""
    tanzuKubernetesRelease: v1.21.8---vmware.1-tkg.2
    tkg.tanzu.vmware.com/cluster-name: api5
  annotations:
    osInfo: ubuntu,20.04,amd64
    TKGVERSION: v1.4.3
  name: api5
  namespace: api5-ns
```
- At this point, the capiyaml is ready to be consumed by the VCD APIs to perform various operations. For verification, make sure the cluster name and namespace values are consistent. Copy the content of the capiyaml to generate a JSON string using a tool such as the one linked here.
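If you prefer to do the JSON-string conversion locally rather than with an online tool, a minimal sketch with jq, assuming the payload is saved in a file named capiyaml:

```sh
# Escape the whole capiyaml file into a single JSON string value.
jq -Rs '.' capiyaml
```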
The following section describes all supported API operations for Tanzu Kubernetes clusters on VMware Cloud Director:
List Clusters
List all clusters in the customer organization. For the CSE 4.0 release, the CAPVCD version is 1.
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1
```
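A hedged curl sketch of this call; VCD and TOKEN are assumed environment variables holding the VCD address and a bearer token, and the API version in the Accept header may differ on your installation:

```sh
# List all capvcdCluster entities visible to the user.
curl -s "https://${VCD}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1" \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${TOKEN}"
```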
Info Cluster
Filter clusters by name:
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
Get cluster by ID:
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/{id}
```
Get Kubeconfig of the cluster:
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/{id}
```
The kubeconfig can be found in the response at: entity.status.capvcd.private.kubeconfig
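A hedged sketch of extracting it with curl and jq (the exact casing of the kubeconfig property may vary by CAPVCD version):

```sh
# Fetch the cluster entity and extract the kubeconfig to a file.
curl -s "https://${VCD}/cloudapi/1.0.0/entities/${CLUSTER_ID}" \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${TOKEN}" \
  | jq -r '.entity.status.capvcd.private.kubeconfig' > cluster-kubeconfig
```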
Create a new Cluster
```
POST https://{{vcd}}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:1.1.0

{
  "entityType": "urn:vcloud:type:vmware:capvcdCluster:1.1.0",
  "name": "demo",
  "externalId": null,
  "entity": {
    "kind": "CAPVCDCluster",
    "spec": {
      "vcdKe": {
        "isVCDKECluster": true,
        "markForDelete": false,
        "forceDelete": false,
        "autoRepairOnErrors": true
      },
      "capiYaml": "apiVersion: cluster.x-k8s.io/v1beta1\nkind: Cluster\nmetadata:\n  labels:\n    cluster-role.tkg.tanzu.vmware.com/management: \"\"\n    tanzuKubernetesRelease: v1.22.9---vmware.1-tkg.2\n    tkg.tanzu.vmware.com/cluster-name: api4\n  name: api4\n  namespace: api4-ns\n  annotations:\n    osInfo: ubuntu,20.04,amd64\n    TKGVERSION: v1.5.4\nspec:\n  clusterNetwork:\n    pods:\n      cidrBlocks:\n        - 100.96.0.0/11\n    serviceDomain: cluster.local\n    services:\n      cidrBlocks:\n        - 100.64.0.0/13\n  controlPlaneRef:\n    apiVersion: controlplane.cluster.x-k8s.io/v1beta1\n    kind: KubeadmControlPlane\n    name: api4-control-plane\n    namespace: api4-ns\n  infrastructureRef:\n    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1\n    kind: VCDCluster\n    name: api4\n    namespace: api4-ns\n---\napiVersion: v1\ndata:\n  password: \"\"\n  refreshToken: WU4zdWY3b21FM1k1SFBXVVp6SERTZXZvREFSUXQzTlE=\n  username: dG9ueQ==\nkind: Secret\nmetadata:\n  name: capi-user-credentials\n  namespace: api4-ns\ntype: Opaque\n---\napiVersion: infrastructure.cluster.x-k8s.io/v1beta1\nkind: VCDCluster\nmetadata:\n  name: api4\n  namespace: api4-ns\nspec:\n  loadBalancerConfigSpec:\n    vipSubnet: \"\"\n  org: stark\n  ovdc: vmware-cloud\n  ovdcNetwork: private-snat\n  site: https://vcd.tanzu.lab\n  useAsManagementCluster: false\n  userContext:\n    secretRef:\n      name: capi-user-credentials\n      namespace: api4-ns\n---\napiVersion: infrastructure.cluster.x-k8s.io/v1beta1\nkind: VCDMachineTemplate\nmetadata:\n  name: api4-control-plane\n  namespace: api4-ns\nspec:\n  template:\n    spec:\n      catalog: CSE-Templates\n      diskSize: 20Gi\n      enableNvidiaGPU: false\n      placementPolicy: null\n      sizingPolicy: TKG small\n      storageProfile: lab-shared-storage\n      template: Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1\n---\napiVersion: controlplane.cluster.x-k8s.io/v1beta1\nkind: KubeadmControlPlane\nmetadata:\n  name: api4-control-plane\n  namespace: api4-ns\nspec:\n  kubeadmConfigSpec:\n    clusterConfiguration:\n      apiServer:\n        certSANs:\n          - localhost\n          - 127.0.0.1\n      controllerManager:\n        extraArgs:\n          enable-hostpath-provisioner: \"true\"\n      dns:\n        imageRepository: projects.registry.vmware.com/tkg\n        imageTag: v1.8.4_vmware.9\n      etcd:\n        local:\n          imageRepository: projects.registry.vmware.com/tkg\n          imageTag: v3.5.4_vmware.2\n      imageRepository: projects.registry.vmware.com/tkg\n    initConfiguration:\n      nodeRegistration:\n        criSocket: /run/containerd/containerd.sock\n        kubeletExtraArgs:\n          cloud-provider: external\n          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%\n    joinConfiguration:\n      nodeRegistration:\n        criSocket: /run/containerd/containerd.sock\n        kubeletExtraArgs:\n          cloud-provider: external\n          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%\n    users:\n      - name: root\n        sshAuthorizedKeys:\n          - \"\"\n  machineTemplate:\n    infrastructureRef:\n      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1\n      kind: VCDMachineTemplate\n      name: api4-control-plane\n      namespace: api4-ns\n  replicas: 1\n  version: v1.22.9+vmware.1\n---\napiVersion: infrastructure.cluster.x-k8s.io/v1beta1\nkind: VCDMachineTemplate\nmetadata:\n  name: api4-md-0\n  namespace: api4-ns\nspec:\n  template:\n    spec:\n      catalog: CSE-Templates\n      diskSize: 20Gi\n      enableNvidiaGPU: false\n      placementPolicy: null\n      sizingPolicy: TKG small\n      storageProfile: lab-shared-storage\n      template: Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1\n---\napiVersion: bootstrap.cluster.x-k8s.io/v1beta1\nkind: KubeadmConfigTemplate\nmetadata:\n  name: api4-md-0\n  namespace: api4-ns\nspec:\n  template:\n    spec:\n      joinConfiguration:\n        nodeRegistration:\n          criSocket: /run/containerd/containerd.sock\n          kubeletExtraArgs:\n            cloud-provider: external\n            eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%\n      users:\n        - name: root\n          sshAuthorizedKeys:\n            - \"\"\n---\napiVersion: cluster.x-k8s.io/v1beta1\nkind: MachineDeployment\nmetadata:\n  name: api4-md-0\n  namespace: api4-ns\nspec:\n  clusterName: api4\n  replicas: 1\n  selector:\n    matchLabels: null\n  template:\n    spec:\n      bootstrap:\n        configRef:\n          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1\n          kind: KubeadmConfigTemplate\n          name: api4-md-0\n          namespace: api4-ns\n      clusterName: api4\n      infrastructureRef:\n        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1\n        kind: VCDMachineTemplate\n        name: api4-md-0\n        namespace: api4-ns\n      version: v1.22.9+vmware.1\n"
    },
    "apiVersion": "capvcd.vmware.com/v1.1"
  }
}
```
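A hedged sketch of submitting this payload with curl, assuming the JSON above is saved as create-cluster.json and the same VCD and TOKEN variables as before:

```sh
# Create the cluster RDE from the prepared payload.
curl -X POST "https://${VCD}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:1.1.0" \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d @create-cluster.json
```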
Resize a Cluster
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
- Fetch the cluster ID ("id": "urn:vcloud:entity:vmware:capvcdCluster:<ID>") from the above API call's output.
- Copy the entire output of the API response.
- Note down the eTag value from the API response header.
- Modify the "capiyaml" with the following values:
  - To resize control plane VMs, modify kubeadmcontrolplane.spec.replicas with the desired number of control plane VMs. Note that only odd numbers of control plane nodes are supported.
  - To resize worker VMs, modify MachineDeployment.spec.replicas with the desired number of worker VMs.
- While performing the PUT API call, ensure you include the fetched eTag value in the If-Match header, as shown below.
```
PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET API response}
headers:
  Accept: application/json;version=37.0
  Authorization: Bearer {token}
  If-Match: {eTag value from the previous GET call}
BODY: Copy the entire body from the previous GET call and modify the capiyaml values as described in the Modify step above.
```
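A hedged end-to-end sketch of the eTag handling with curl; the file names and the replicas edit are illustrative:

```sh
# Fetch the cluster entity, saving the response headers and body.
curl -s -D headers.txt -o cluster.json \
  "https://${VCD}/cloudapi/1.0.0/entities/${CLUSTER_ID}" \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${TOKEN}"

# Extract the eTag value from the saved headers.
ETAG=$(grep -i '^etag:' headers.txt | cut -d' ' -f2 | tr -d '\r')

# ...edit the replicas inside the capiYaml in cluster.json as described above, then:
curl -X PUT "https://${VCD}/cloudapi/1.0.0/entities/${CLUSTER_ID}" \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -H "If-Match: ${ETAG}" \
  -d @cluster.json
```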
Upgrade a Cluster
To upgrade a cluster, the provider admin needs to publish the desired Tanzu Kubernetes templates to the customer organization in the catalog used by Container Service Extension.
Obtain the GET API response for the cluster to be upgraded as follows:
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
- Fetch the cluster ID ("id": "urn:vcloud:entity:vmware:capvcdCluster:<ID>") from the above API call's output.
- Copy the entire output of the API response.
- Note down the eTag value from the API response header.
- The customer user performing the cluster upgrade will require access to the information in Table 3. Modify the following values to match the target TKG version. The following table shows an upgrade within TKG 1.5.4 from v1.20.15+vmware.1 to v1.22.9+vmware.1.
| Control Plane Version | Old Values | New Values |
| --- | --- | --- |
| VCDMachineTemplate | | |
| VCDMachineTemplate.spec.template.spec.template | Ubuntu 20.04 and Kubernetes v1.20.15+vmware.1 | Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1 |
| KubeadmControlPlane | | |
| KubeadmControlPlane.spec.version | v1.20.15+vmware.1 | v1.22.9+vmware.1 |
| KubeadmControlPlane.spec.kubeadmConfigSpec.dns | imageTag: v1.7.0_vmware.15 | imageTag: v1.8.4_vmware.9 |
| KubeadmControlPlane.spec.kubeadmConfigSpec.etcd | imageTag: v3.4.13_vmware.23 | imageTag: v3.5.4_vmware.2 |
| KubeadmControlPlane.spec.kubeadmConfigSpec.imageRepository | imageRepository: projects.registry.vmware.com/tkg | imageRepository: projects.registry.vmware.com/tkg |
| Worker Node Version | | |
| VCDMachineTemplate | | |
| VCDMachineTemplate.spec.template.spec.template | Ubuntu 20.04 and Kubernetes v1.20.15+vmware.1 | Ubuntu 20.04 and Kubernetes v1.22.9+vmware.1 |
| MachineDeployment | | |
| MachineDeployment.spec.version | v1.20.15+vmware.1 | v1.22.9+vmware.1 |
- While performing the PUT API call, ensure you include the fetched eTag value in the If-Match header.
```
PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET API response}
headers:
  Accept: application/json;version=37.0
  Authorization: Bearer <token>
  If-Match: <eTag value from the previous GET call>
BODY: Copy the entire body from the previous GET call and modify the capiyaml values as described in the step above.
```
Delete a Cluster
```
GET https://{{vcd}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1?filter=name==clustername
```
- Fetch the cluster ID ("id": "urn:vcloud:entity:vmware:capvcdCluster:<ID>") from the above API call's output.
- Copy the entire output of the API response.
- Note down the eTag value from the API response header.
- Add or modify the following fields under entity.spec.vcdKe to delete or forcefully delete the cluster:
  - "markForDelete": true -> set the value to true to delete the cluster
  - "forceDelete": true -> set this value to true for forceful deletion of a cluster
```
PUT https://{{vcd}}/cloudapi/1.0.0/entities/{cluster-id from the GET API response}

{
  "entityType": "urn:vcloud:type:vmware:capvcdCluster:1.1.0",
  "name": "demo",
  "externalId": null,
  "entity": {
    "kind": "CAPVCDCluster",
    "spec": {
      "vcdKe": {
        "isVCDKECluster": true,
        // Add or modify this field to delete the cluster
        "markForDelete": true,
        // Add or modify this field to force delete the cluster
        "forceDelete": false,
        "autoRepairOnErrors": true
      },
      "capiYaml": "<Your capiYaml payload generated from Step 5>"
    },
    .
    .
    #Other payload from the GET API response
    .
    .
    "org": {
      "name": "acme",
      "id": "urn:vcloud:org:cd11f6fd-67ba-40e5-853f-c17861120184"
    }
  }
}
```
Recommendations for API usage during automation
- DO NOT hardcode API URLs with RDE versions. ALWAYS parameterize RDE versions. For example, in
POST https://{{vcd}}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:1.1.0
ensure you declare 1.1.0 as a variable. This will allow easy API client upgrades to future versions of CSE; a sketch follows below.
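A minimal shell sketch of this parameterization, under the same assumed VCD and TOKEN variables as the earlier examples:

```sh
# Keep the RDE version out of the hardcoded URL.
RDE_VERSION="1.1.0"   # bump this single value when CSE adopts a newer capvcdCluster version
curl -X POST "https://${VCD}/cloudapi/1.0.0/entityTypes/urn:vcloud:type:vmware:capvcdCluster:${RDE_VERSION}" \
  -H "Accept: application/json;version=37.0" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d @create-cluster.json
```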
- Ensure the API client code ignores any unknown/additional properties while unmarshaling the API response, as the example below illustrates:
```
#For example, the capvcdCluster 1.1.0 API payload looks like below
{
  status: {
    kubernetesVersion: 1.20.8,
    nodePools: {}
  }
}

#In the future, the next version of capvcdCluster (1.2.0) may add more properties ("add-ons") to the payload.
#The old API client code must ensure it does not break on seeing newer properties in future payloads.
{
  status: {
    kubernetesVersion: 1.20.8,
    nodePools: {},
    add-ons: {} // new property in a future version
  }
}
```
Summary
To summarize, we looked at CRUD operations for Tanzu Kubernetes clusters on the VMware Cloud Director platform using VMware Cloud Director supported APIs. Please feel free to check out other resources for Container Service Extension as follows: