NFS node management and PV creation


Advantages of using NFS in Kubernetes


Using existing storage. You can mount an existing data volume that you already use locally or at another remote location.

Data sharing. Because NFS volumes are persistent, they can be used to exchange data between containers, whether in the same pod or in different pods.

Resilience. If the NFS server fails, the pods will keep restarting automatically until the NFS server recovers, after which they continue to operate. Backing up the data is still recommended.

Easy migration. It is easy to copy the NFS server's data to another NFS server and switch over to it in the event of a critical failure.

When a Kubernetes application is finished with its storage, Kubernetes returns the persistent volume and its claim to the pool. The application's data remains on the volume; it can be cleared manually by logging in to the NFS host virtual machine and deleting the files, for example as shown below.
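As an illustration only (the NFS host IP address and export path follow the examples used later in this article and will differ in your environment), manual cleanup might look like this:

# Log in to the NFS host and remove the data left behind by a
# released persistent volume. Adjust the path to the export you
# want to clean.
ssh root@10.150.200.22
rm -rf /export/vol1/*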

CSE can automatically add NFS nodes to the Kubernetes configuration when creating a new cluster. Cluster administrators can use NFS nodes to provide static persistent volumes, which in turn makes it possible to deploy stateful applications.

Persistent Volumes

Static persistent volumes are provisioned in advance by the cluster administrator. They carry the details of real storage that is available for use by cluster users. They exist in the Kubernetes API and are ready for consumption. Users allocate a static persistent volume by creating a persistent volume claim that requests the same amount of storage or less. CSE supports static volumes hosted on NFS.

NFS Volume Architecture

An NFS volume allows you to mount an existing NFS (Network File System) share into one or more pods. When a pod is removed, the contents of the NFS volume are preserved; the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be shared between pods. An NFS volume can be mounted by multiple writers at the same time.
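As a minimal sketch of this mechanism (the pod name is illustrative, and the server IP and export path match the examples used later in this article), a pod can mount an NFS share directly, without a persistent volume:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test                  # illustrative name
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs-share
      mountPath: /mnt
  volumes:
  - name: nfs-share
    nfs:
      server: 10.150.200.22       # NFS host IP (configured below)
      path: /export/vol1          # exported directory (configured below)

The rest of this article uses the PersistentVolume and PersistentVolumeClaim approach instead, which decouples pods from the storage details.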

To use NFS volumes, we need our own NFS server with exported shares. CSE provides commands for adding a pre-configured NFS server to any given cluster.

Creating a cluster with an attached NFS node

NFS administration starts with creating a cluster in which we can prepare the NFS node. Let's create an Ubuntu-based cluster using the vcd cse cluster create command shown below. The --enable-nfs option signals to CSE that it should include an NFS node. The --ssh-key option supplies the user's SSH public key to the nodes. The SSH key is needed to log in to the NFS host and configure the shares.

# Login.
vcd login cse.acme.com devops imanadmin  --password='T0pS3cr3t'
# Create cluster with 2 worker nodes and NFS server node.
vcd cse cluster create mycluster --nodes 2 \
 --network mynetwork -t ubuntu-16.04_k8-1.13_weave-2.3.0 -r 1 --enable-nfs \
 --ssh-key ~/.ssh/id_rsa.pub

This operation takes several minutes while the CSE extension creates the Kubernetes vApp.
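Once the command returns, you can check the state of the cluster and its nodes. The cluster info subcommand is available in recent versions of the CSE client plugin; the output format varies by version.

# Show cluster details, including the worker and NFS nodes.
vcd cse cluster info mycluster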

You can also add a node to an existing cluster using a command similar to the following.

# Add an NFS server (node of type NFS).
vcd cse node create mycluster --nodes 1 --network mynetwork \
  -t ubuntu-16.04_k8-1.13_weave-2.3.0 -r 1 --type nfsd
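To confirm that the NFS node was added, you can list the cluster's nodes (assuming your CSE client version provides the node list subcommand):

# The NFS node is listed with node type 'nfsd'.
vcd cse node list mycluster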

Configuring NFS Shared Resources

The next step is to create the NFS shares that will later be handed out as persistent volumes. First, we need to attach an independent disk to the NFS node so we can create a file system to export.

# List the VMs in the vApp to find the NFS node. Look for a VM name that
# starts with 'nfsd-', e.g., 'nfsd-ljsn'. Note the VM name and IP address.
vcd vapp info mycluster
# Create a 100Gb independent disk and attach to the NFS VM.
vcd disk create nfs-shares-1 100g --description 'Kubernetes NFS shares'
vcd vapp attach mycluster nfsd-ljsn nfs-shares-1

Next, ssh into the NFS host itself.

ssh root@10.150.200.22
... (root prompt appears) ...

Partition and format the new drive. On Ubuntu the drive appears as /dev/sdb. The interactive procedure below is one example; a scripted alternative follows it, and you can use any method you prefer.

root@nfsd-ljsn:~# parted /dev/sdb
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) unit GB
(parted) mkpart primary 0 100
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      0.00GB  100GB  100GB               primary

(parted) quit
root@nfsd-ljsn:~# mkfs -t ext4 /dev/sdb1
Creating filesystem with 24413696 4k blocks and 6111232 inodes
Filesystem UUID: 8622c0f5-4044-4ebf-95a5-0372256b34f0
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
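If you prefer a non-interactive approach, the same result can be achieved with parted in script mode. This is an alternative sketch; adjust the device name if your disk is not /dev/sdb.

# Create a GPT label and one partition spanning the whole disk,
# then format it as ext4.
parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdb1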

Create a mount point, add the new partition to /etc/fstab, and mount it.

mkdir /export
echo '/dev/sdb1  /export   ext4  defaults   0 0' >> /etc/fstab
mount -a
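As a quick sanity check (the reported size depends on your disk), confirm that the new file system is mounted:

# /dev/sdb1 should be mounted on /export.
df -h /export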

At this stage you should have a working file system mounted at /export. The last step is to create directories and export them via NFS.

cd /export
mkdir vol1 vol2 vol3 vol4 vol5
vi /etc/exports
...Add following at end of file...
/export/vol1 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol2 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol3 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol4 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol5 *(rw,sync,no_root_squash,no_subtree_check)
...Save and quit
exportfs -r
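Before leaving the host, you can verify the exports. The showmount utility ships with the NFS server packages on Ubuntu.

# List the directories currently exported by this server.
showmount -e localhost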


The preparation of shared file systems is now complete. You can exit the NFS node.

Using Kubernetes persistent volumes

To use the shares, we need to create PersistentVolume resources. First, let's fetch the kubeconfig so that we can access the new Kubernetes cluster.

vcd cse cluster config mycluster > mycluster.cfg
export KUBECONFIG=$PWD/mycluster.cfg
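A quick check that the kubeconfig works (node names will differ in your cluster):

# All master and worker nodes should report a Ready status.
kubectl get nodes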

Create a PersistentVolume resource for the share at /export/vol1. The path must match the exported path exactly, otherwise you will get errors when Kubernetes tries to mount the NFS share in the pod.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Same IP as the NFS host we ssh'ed to earlier.
    server: 10.150.200.22
    path: "/export/vol1"
EOF
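After creating the volume, it should appear with the status Available (a quick check; the exact columns depend on your kubectl version):

# The new volume is not yet bound to any claim.
kubectl get pv nfs-vol1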

Then claim the persistent volume with a PersistentVolumeClaim whose requested size fits within the volume's capacity.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
EOF
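Once the claim is created, it should bind to the nfs-vol1 volume:

# STATUS changes to Bound when the claim matches the volume.
kubectl get pvc nfs-pvc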

Now run an application that uses the persistent volume claim. This example runs busybox in a couple of pods that write to the shared storage.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs-pvc
EOF

Verifying operation

We can check the status of the deployed application and its storage. First, make sure that all the resources are up and running.

kubectl get pv
kubectl get pvc
kubectl get rc
kubectl get pods

Now we can inspect the state of the storage by using the kubectl exec command to run a command on one of the pods. (Substitute a pod name from your kubectl get pods output.)

$ kubectl exec -it nfs-busybox-gcnht cat /mnt/index.html
Fri Dec 28 00:16:08 UTC 2018
nfs-busybox-gcnht


If you run the previous command several times, you will see the date and hostname change as the pods write to the index.html file.
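As an illustration (using the same example pod name; replace it with one from your own cluster), you can watch the file change in a short loop:

# Print the shared file a few times, ten seconds apart.
for i in 1 2 3; do
  kubectl exec nfs-busybox-gcnht -- cat /mnt/index.html
  sleep 10
done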

