Read Write Once Persistent Storage for OpenShift Origin using Gluster

In this blog we shall learn about:

  1. Containers and Persistent Storage
  2. About OpenShift Origin
  3. Terminology and background
  4. Our approach
  5. Setting up
    • Gluster and iSCSI target
    • iSCSI Initiator
    • Origin master and nodes
  6. Conclusion
  7. References

 

Containers and Persistent Storage

As we all know, containers are stateless entities used to deploy applications, and they therefore need persistent storage to keep application data available across container incarnations.

Persistent storage for containers comes in two flavours: shared and non-shared.
Shared storage:
Consider this a volume/store where multiple containers perform both read and write operations on the same data. It is useful for applications like web servers that need to serve the same data from multiple container instances.

Non Shared/Read Write Once Storage:
Only a single container can perform write operations to this store at a given time.

This blog explains non-shared (read write once) storage for OpenShift Origin using Gluster.

 

About OpenShift Origin

OpenShift Origin is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment.

A few interesting features include multi-tenancy support, a web console, centralized administration, and the ability to automatically deploy an application on every new commit to its source repo.


Read more at the origin GitHub repo.

 

Terminology and background

Refer to the ‘Terminology and background’ section of our previous post.

 

Our Approach

With all the background covered, let me jump into the actual essence of this blog and explain how we can expose a file on a Gluster volume as read write once persistent storage in OpenShift pods.

The version of Kubernetes that Origin uses in my case, v1.2.x, does not provide/understand multipathing; that patch was merged in the v1.3.alpha3 release.

Hence, in this blog I’m going with multipath disabled. Once the Ansible playbook is upgraded to the latest Origin, which uses Kubernetes v1.3.0, I shall update the blog with the multipath changes.

In our approach, each OpenShift Origin node initiates the iSCSI session, attaches the iSCSI target as a block device, and serves it to the pod where the application that requires persistent storage runs.


Now without any delay let me walk through the setup details…

 

Setting Up

You need 6 nodes for this setup: 3 act as Gluster nodes from which the iSCSI target is served, 1 acts as the OpenShift Origin master, and the other 2 are the iSCSI initiators, which also act as Origin nodes.

  • We create a gluster replica 3 volume using the 3 nodes {Node1, Node2 and Node3}.
  • Define the iSCSI target using the same nodes and expose a ‘LUN’ from each of them.
  • Use Node4 and Node5 as iSCSI initiators by logging in to the iSCSI target session created above (no multipathing).
  • Set up the OpenShift Origin cluster using {Node4, Node5 and Node6}; Node6 is the master and the other 2 are slave nodes.
  • From Node6, create the pod and examine the iSCSI target device mounted inside it.

Gluster and iSCSI target Setup

Refer to the ‘Gluster and iSCSI target Setup’ section of our previous post.
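For context, the heart of that setup is a replica 3 volume spanning the three nodes, with each node exposing a LUN backed by a file on that volume through its iSCSI target. Below is a minimal sketch of the volume creation only (the volume name ‘block-store’ and brick paths are assumptions; the full steps, including the LIO/tcmu target configuration, are in the previous post):

[root@Node1 ~]# gluster peer probe Node2
[root@Node1 ~]# gluster peer probe Node3
[root@Node1 ~]# gluster volume create block-store replica 3 Node1:/bricks/brick1 Node2:/bricks/brick1 Node3:/bricks/brick1
[root@Node1 ~]# gluster volume start block-store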

iSCSI initiator Setup

Refer to the ‘iSCSI initiator Setup’ section of our previous post.
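In short, the initiator side boils down to discovering and logging in to the target from each Origin node. A sketch, assuming the portal and IQN used later in the pod manifest (install the initiator tools first), to be run on Node4 and Node5:

[root@Node4 ~]# dnf install -y iscsi-initiator-utils
[root@Node4 ~]# iscsiadm -m discovery -t sendtargets -p Node1
[root@Node4 ~]# iscsiadm -m node -T iqn.2016-06.org.gluster:Node1 -p Node1:3260 --login
[root@Node4 ~]# lsblk   # the LUN should now show up as a plain block device, e.g. /dev/sda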

OpenShift Origin Master and Nodes Setup

Master -> Node6
Slaves -> Node5 & Node4

Clone the openshift ansible repo
[root@Node6 ~]# git clone https://github.com/openshift/openshift-ansible.git

Install ansible on all the nodes including master
# dnf install -y ansible pyOpenSSL python-cryptography

Configure the nodes in the inventory file;
all you need to do is replace the host addresses (Node4, Node5 and Node6 below) with your own
[root@Node6 ~]# cat > /etc/ansible/hosts
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_sudo must be set to true
#ansible_sudo=true

deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true',
# 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
Node6

# host group for nodes, includes region info
[nodes]
Node6 openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
Node5 openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
Node4 openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
^C

Enable passwordless (key-based) ssh logins on all machines

Generate ssh key 
# ssh-keygen

Share the ssh key with all the nodes; to do so, execute the below on the master,
with $HOSTS being each host address/IP (including the master's), one at a time
[root@Node6 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub $HOSTS
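For example, a hypothetical one-liner that loops over the three hosts (assuming the node names resolve):

[root@Node6 ~]# for h in Node4 Node5 Node6; do ssh-copy-id -i ~/.ssh/id_rsa.pub $h; done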

Just as a matter of precaution, disable SELinux on all the hosts
# setenforce 0
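If you also want this to survive reboots, switch the config file to permissive as well (an optional step; the path assumes a stock Fedora install):

# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config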

Install a few package dependencies that the playbook does not handle
[root@Node6 ~]# ansible all -m shell -a "dnf install python2-dnf -y" 
[root@Node6 ~]# ansible all -m shell -a "dnf install python-dbus -y"
[root@Node6 ~]# ansible all -m shell -a "dnf install libsemanage-python -y"

Let's execute the playbook
[root@Node6 ~]# cd ~/openshift-ansible
[root@Node6 openshift-ansible]# ansible-playbook playbooks/byo/config.yml
It takes ~40 minutes to finish this, at least that's what it took me. 

Check all nodes are ready
[root@Node6 ~]# oc get nodes
NAME    STATUS                     AGE
Node4   Ready                      1h
Node5   Ready                      1h
Node6   Ready,SchedulingDisabled   1h

Check for pods
[root@Node6 ~]# oc get pods

 

Log in to the Origin web console at https://Node6:8443
Credentials: user -> admin, passwd -> admin


Create a new project, say ‘blockstore-gluster’
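If you prefer the CLI to the console, the equivalent (which also switches you to the new project) is:

[root@Node6 ~]# oc new-project blockstore-gluster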


 

Switch to 'blockstore-gluster' project
[root@Node6 ~]# oc project blockstore-gluster
Now using project "blockstore-gluster" on server "https://Node6:8443".

Write a manifest/artifact for the pod; the targetPortal and iqn below should match the iSCSI target set up earlier
[root@Node6 ~]# cat > iscsi-pod.json
{
   "apiVersion": "v1",
   "kind": "Pod",
   "metadata": {
      "name": "glusterpod"
   },
   "spec": {
      "containers": [
         {
            "name": "iscsi-rw",
            "image": "fedora",
            "volumeMounts": [
               {
                  "mountPath": "/mnt/gluster-store",
                  "name": "iscsi-rw"
               }
            ],
            "command": [ "sleep", " 100000" ]
         }
      ],
      "volumes": [
         {
            "name": "iscsi-rw",
            "iscsi": {
               "targetPortal": "Node1:3260",
               "iqn": "iqn.2016-06.org.gluster:Node1",
               "lun": 0,
               "fsType": "xfs",
               "readOnly": false
            }
         }
      ]
   } 
}
^C

Create the pod
[root@Node6 ~]# oc create -f ~/iscsi-pod.json 
pod "glusterpod" created

Get the pod info
[root@Node6 ~]# oc get pods
NAME         READY     STATUS              RESTARTS   AGE
glusterpod   0/1       ContainerCreating   0          20s

Check events
[root@Node6 ~]# oc get events -w
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
2016-08-16 16:16:10 +0530 IST 2016-08-16 16:16:10 +0530 IST 1 glusterpod Pod Normal Scheduled {default-scheduler } Successfully assigned glusterpod to Node5
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
2016-08-16 16:16:14 +0530 IST 2016-08-16 16:16:14 +0530 IST 1 glusterpod Pod spec.containers{iscsi-rw} Normal Pulling {kubelet Node5} pulling image "fedora"
2016-08-16 16:17:17 +0530 IST 2016-08-16 16:17:17 +0530 IST 1 glusterpod Pod spec.containers{iscsi-rw} Normal Pulled {kubelet Node5} Successfully pulled image "fedora"
2016-08-16 16:17:18 +0530 IST 2016-08-16 16:17:18 +0530 IST 1 glusterpod Pod spec.containers{iscsi-rw} Normal Created {kubelet Node5} Created container with docker id 0208911923f1
2016-08-16 16:17:18 +0530 IST 2016-08-16 16:17:18 +0530 IST 1 glusterpod Pod spec.containers{iscsi-rw} Normal Started {kubelet Node5} Started container with docker id 0208911923f1

[root@Node6 ~]# oc get pods
NAME         READY     STATUS    RESTARTS   AGE
glusterpod   1/1       Running   0          1m
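To see which node the pod landed on and the iSCSI volume details Kubernetes recorded for it, oc describe is handy:

[root@Node6 ~]# oc describe pod glusterpod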

Get into the pod
[root@Node6 ~]# oc exec -it glusterpod bash

[root@glusterpod /]# df -Th
Filesystem                           Type   Size   Used   Avail   Use%   Mounted on
[...]
/dev/sda                             xfs    8G     33M    8G      1%     /mnt/gluster-store
/dev/mapper/fedora_dhcp42--82-root   xfs    15G    1.8G   14G     12%    /etc/hosts
[...]

[root@glusterpod /]# cd /mnt/gluster-store/
[root@glusterpod gluster-store]# ls
1 10 2 3 4 5 6 7 8 9 
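Since this is a read-write mount, a quick (hypothetical) smoke test is to drop a file here; it lands on the file backing the LUN on the Gluster volume and stays around across container incarnations:

[root@glusterpod gluster-store]# echo "hello from glusterpod" > test.txt
[root@glusterpod gluster-store]# cat test.txt
hello from glusterpod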


 

Origin web console with the pod running:

[Screenshot: Origin web console showing the glusterpod pod running]

Details of the pod:

[Screenshot: pod details page]

That’s cool, isn’t it?

 

Conclusion

This just showcases how Gluster can be used as a distributed block store with an OpenShift Origin cluster. More details about multipathing, integration with Mesos, etc. will follow in further posts.

 

References

https://docs.openshift.org/latest/welcome/index.html

https://github.com/openshift/openshift-ansible/

http://kubernetes.io/

http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services

http://rootfs.github.io/iSCSI-Kubernetes/

http://blog.gluster.org/2016/04/using-lio-with-gluster/

https://docs.docker.com/engine/tutorials/dockervolumes/

http://scst.sourceforge.net/scstvslio.html

http://events.linuxfoundation.org/sites/events/files/slides/tcmu-bobw_0.pdf

https://www.kernel.org/doc/Documentation/target/tcmu-design.txt

https://lwn.net/Articles/424004/

http://www.gluster.org/community/documentation/index.php/GlusterFS_Documentation