How to Install and configure vSphere CSI Driver on OpenShift 4.x

Introduction

In this post I am going to install the vSphere CSI Driver version 2.0 on OpenShift 4.x. In my demo environment I’m connecting to a VMware Cloud on AWS SDDC and its vCenter Server, but the steps are the same for an on-premises deployment.

I have updated the configuration files available from Red Hat for installing the CSI Driver on OpenShift to make them compatible with the latest CSI Driver. You can find them in my GitHub repo;

- Pre-Reqs
  - vCenter Server Role
  - Download the deployment files
  - Create the vSphere CSI secret in OpenShift
  - Create Roles, ServiceAccount and ClusterRoleBinding for vSphere CSI Driver
- Installation
  - Install vSphere CSI driver
  - Verify Deployment
- Create a persistent volume claim
- Troubleshooting

In your environment, the cluster VMs will need the “disk.enableUUID” advanced setting set to TRUE and VM hardware version 15 or higher.
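If you want to set these from the command line rather than the vSphere Client, something along these lines should work with govc (I don’t use govc elsewhere in this post, so treat it as an illustrative sketch; the VM path is a placeholder):

# Set the advanced setting on each cluster VM (VM path is a placeholder)
govc vm.change -vm /SDDC-Datacenter/vm/Workloads/ocp-worker-0 -e disk.enableUUID=TRUE

# Upgrade the virtual hardware to version 15 (the VM must be powered off)
govc vm.upgrade -version=15 -vm /SDDC-Datacenter/vm/Workloads/ocp-worker-0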

Pre-Reqs
vCenter Server Role

In my environment I will use the default administrator account. For production environments, however, I recommend you follow a strict RBAC procedure: configure the necessary roles and use a dedicated account for the CSI driver to connect to your vCenter.

To make life easier I have created a PowerCLI script to create the necessary roles in vCenter based on the vSphere CSI documentation;
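If you would rather not run a script, a rough govc equivalent is below. The role names and privilege IDs reflect my reading of the vSphere CSI documentation, so double-check them against the current privilege list there before using them:

# Illustrative only - verify the privilege list against the vSphere CSI documentation
govc role.create CNS-DATASTORE Datastore.FileManagement
govc role.create CNS-HOST-CONFIG-STORAGE Host.Config.Storage
govc role.create CNS-VM VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice
govc role.create CNS-SEARCH-AND-SPAWN Cns.Searchable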

Download the deployment files

Run the following;

git clone https://github.com/saintdle/vSphere-CSI-Driver-2.0-OpenShift-4.git
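That pulls down everything used in the rest of this post; the files we will be working with are:

cd vSphere-CSI-Driver-2.0-OpenShift-4
ls
# csi-vsphere-for-OCP.conf                       - vCenter connection details (becomes the secret)
# vsphere-csi-controller-rbac-for-OCP.yaml       - ServiceAccount, Roles and ClusterRoleBinding
# vsphere-csi-controller-deployment-for-OCP.yaml - CSI controller Deployment
# vsphere-csi-node-ds-for-OCP.yaml               - CSI node DaemonSet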

Create the vSphere CSI Secret in OpenShift

Edit the file “csi-vsphere-for-OCP.conf” with your vCenter infrastructure details;

[Global]
 
# run the following on your OCP cluster to get the ID
# oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
cluster-id = "OCP_CLUSTER_ID"

[VirtualCenter "VC_FQDN"]
insecure-flag = "true"
user = "USER"
password = "PASSWORD"
port = "443"
datacenters = "VC_DATACENTER"

Create the secret;

oc create secret generic vsphere-config-secret --from-file=csi-vsphere-for-OCP.conf --namespace=kube-system

oc get secret vsphere-config-secret --namespace=kube-system

This configuration is for block volumes; access to vSAN file volumes is also supported, and you can see an example of that configuration here;

Remove your “csi-vsphere-for-OCP.conf” file once the secret is created, as it contains your vCenter password in clear text.
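Once you have confirmed the secret exists, that is as simple as:

rm csi-vsphere-for-OCP.conf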

Create Roles, ServiceAccount and ClusterRoleBinding for vSphere CSI Driver

oc apply -f vsphere-csi-controller-rbac-for-OCP.yaml
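If you want to confirm the RBAC objects were created before moving on, a quick check is to grep for them (the exact object names come from the YAML in the repo):

oc get serviceaccounts --namespace=kube-system | grep vsphere-csi
oc get clusterroles,clusterrolebindings | grep vsphere-csi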
Installation
Install vSphere CSI driver

The driver is made up of the following components:

  • The CSI Controller runs as a Kubernetes Deployment, with a replica count of 1.
  • For version v2.0.0, the vsphere-csi-controller Pod consists of 6 containers:
    • CSI controller, External Provisioner, External Attacher, External Resizer, Liveness probe and vSphere Syncer.

oc apply -f vsphere-csi-controller-deployment-for-OCP.yaml

A CSI node Daemonset is also deployed, running on every node in the cluster.

oc apply -f vsphere-csi-node-ds-for-OCP.yaml
Verify the deployment

You can verify the deployment with the two commands below:

oc get deployment --namespace=kube-system

oc get CSINode
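It is also worth checking that the controller and node pods themselves are up; the names below assume the Deployment and DaemonSet names from the manifests in the repo:

oc get pods --namespace=kube-system | grep vsphere-csi
# Expect one vsphere-csi-controller pod and one vsphere-csi-node pod per node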
Creating a Storage Class that uses the CSI-Driver

Create a storage class to test the deployment. As I am using VMC as my test environment, I need to use some additional optional parameters to ensure that the correct vSAN datastore is used. You can visit the references below for more information.

To get the datastore URL I need to reference, I will use PowerCLI:

Get-Datastore work* | Select-Object -ExpandProperty ExtensionData | Select-Object -ExpandProperty Info
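If you have govc pointed at the same vCenter, datastore.info will also show the URL; the datastore name here is a placeholder for the VMC workload datastore that the work* wildcard above matches:

govc datastore.info WorkloadDatastore
# The "URL:" field is the value to use for datastoreURL in the StorageClass below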

I’m going to create my StorageClass on the fly, but you can find my example YAMLs here;

cat << EOF | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-sc-vmc
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: "vSAN Default Storage Policy"
  datastoreURL: "ds:///vmfs/volumes/vsan:3672d400f5fa4515-8a8cb78f6b972f74/"
EOF
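Confirm the StorageClass exists and is backed by the vSphere CSI provisioner:

oc get storageclass csi-sc-vmc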
Create a Persistent Volume Claim

Finally, we are going to create a PVC. You can find my example PVC files at the same link above.

cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-openshift-vmc-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-sc-vmc
EOF
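Check that the claim binds and that a persistent volume has been provisioned for it:

oc get pvc example-openshift-vmc-block-pvc
oc get pv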

You can see the PVC created under my cluster > Monitor Tab > Cloud Native Storage in vCenter.

Troubleshooting

For troubleshooting, you need to be aware of the four main containers that run in the vSphere CSI Controller pod, and you should investigate their logs when you run into issues:

  • CSI-Attacher
  • CSI-Provisioner
  • vSphere-CSI-Controller
  • vSphere-Syncer
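To pull the logs from each of these containers, you can target them by name within the controller deployment. The container names below are my assumption based on the upstream v2.0 manifests, so adjust them to whatever oc describe pod shows in your cluster:

oc logs deployment/vsphere-csi-controller --namespace=kube-system -c csi-attacher
oc logs deployment/vsphere-csi-controller --namespace=kube-system -c csi-provisioner
oc logs deployment/vsphere-csi-controller --namespace=kube-system -c vsphere-csi-controller
oc logs deployment/vsphere-csi-controller --namespace=kube-system -c vsphere-syncer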

Below I have uploaded some of the logs from a successful setup and creation of a persistent volume.

Resources

Regards
