Gluster as Block Storage with qemu-tcmu

In this blog we will cover:

  1. Terminology and background
  2. Our approach
  3. Setting up
    • Gluster Setup
    • Tcmu-Runner
    • Qemu and Target Setup
    • iSCSI Initiator
  4. Conclusion
  5. Similar Topics

Terminology and background

Gluster is a well-known scale-out distributed storage system, flexible in its design and easy to use. One of its key goals is to provide high availability of data. Despite its distributed nature, Gluster is very easy to set up, and adding or removing storage servers from a Gluster cluster is simple. These capabilities, along with the other data services Gluster provides, make it a very nice software-defined storage platform.

We can access GlusterFS via the FUSE module; however, a single filesystem operation then requires several context switches, which leads to performance issues. Libgfapi is a userspace library for accessing data in GlusterFS. It can perform IO on gluster volumes without the FUSE module or the kernel VFS layer, and hence requires no such context switches. It exposes a filesystem-like API for accessing gluster volumes. Samba, NFS-Ganesha, QEMU and now tcmu-runner all use libgfapi to integrate with GlusterFS.

A unique distributed storage solution built on traditional filesystems
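
To get a feel for the difference, here is a purely illustrative comparison (volume, mount point and image names are placeholders): a FUSE mount routes every filesystem operation through the kernel VFS, while a libgfapi consumer such as qemu-img opens the volume directly.
# mount -t glusterfs NODE1:/block-store /mnt/fuse
# qemu-img info gluster://NODE1/block-store/disk.img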

The SCSI subsystem uses a sort of client-server model. The client/initiator requests IO, which happens through the target, a storage device. The SCSI target subsystem enables a computer node to behave as a SCSI storage device, responding to storage requests from other SCSI initiator nodes.

In simple terms SCSI is a set of standards for physically connecting and transferring data between computers and peripheral devices.

The most common implementation of the SCSI target subsystem is an iSCSI server. iSCSI transports block-level data between the iSCSI initiator and the target, which resides on the actual storage device. The iSCSI protocol wraps SCSI commands and sends them over the TCP/IP layer; upon receiving the packets at the other end, it reassembles them into the same SCSI commands, so the OS sees the target as a local SCSI device.

In other words iSCSI is SCSI over TCP/IP.

The LIO project began with the iSCSI design as its core objective, and created a generic SCSI target subsystem to support iSCSI. LIO is the SCSI target in the Linux kernel. It is entirely kernel code, and allows exported SCSI logical units (LUNs) to be backed by regular files or block devices.

LIO, the Linux-IO Target, is an implementation of an iSCSI target.

TCM is another name for LIO, the in-kernel iSCSI target (server). As we know, existing TCM backstores run in the kernel. TCMU (TCM in Userspace) allows userspace programs to be written that act as SCSI target backstores. This enables a wider variety of backstores without kernel code. Hence the TCMU userspace-passthrough backstore allows a userspace process to handle requests to a LUN. TCMU utilizes the traditional UIO subsystem, which is designed to allow device driver development in userspace.

One such backstore, with excellent clustered network storage capabilities, is GlusterFS
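
As an illustration of the UIO plumbing mentioned above (a hedged sketch; the device number varies and the name shown is abbreviated): once a user-backed LUN exists, TCMU exposes it as a UIO device whose name carries the tcm-user prefix.
# ls /dev/uio*
/dev/uio0
# cat /sys/class/uio/uio0/name
tcm-user/...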

Tcmu-runner (thanks to Andy Grover) builds on the TCMU framework and handles the messy details of the TCMU interface. Tcmu-runner itself has a glusterfs handler that can interact with the backing file in a gluster volume over the libgfapi interface and expose it as a target (over the network).

Some of tcmu-runner's responsibilities include:

  1. Discovering and configuring TCMU UIO devices
  2. Waiting for events on the devices
  3. Managing the command ring buffers

Targetcli is the general management platform for LIO/TCM/TCMU. With its shell interface, targetcli is used to configure LIO.

Think of it as a shell that makes configuring the LIO core easy
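
As an aside, this is roughly how tcmu-runner's own glusterfs handler would be driven from targetcli (a hedged sketch with illustrative names; not the qemu-tcmu path this blog takes):
# targetcli /backstores/user:glfs create GlusterLUN 10G block-store@NODE1/sample.img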

QEMU (Quick Emulator) is a generic and open source machine emulator and virtualizer. It lets users create and manage virtual machines inside the host operating system, whose resources, such as hard drive, RAM and processor, are divided and shared among the guest operating systems (virtual machines).

When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance.

When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux.

QEMU can access disk/drive/VM image files not just from local directories but also from remote locations, using various protocols (iSCSI, NFS, gluster, rbd, nbd, sheepdog, etc.).

In one line, QEMU is a quick emulator and virtualizer that is capable of accessing storage locally and remotely using various protocol drivers. 
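
For instance, qemu-img can inspect an image over several of these protocol drivers directly (hosts, ports, pools and image names below are placeholders):
# qemu-img info gluster://HOST/volume/disk.qcow2
# qemu-img info nbd://HOST:10809/export
# qemu-img info rbd:pool/image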

Qemu-tcmu is another utility/package from QEMU (thanks to Fam Zheng) that uses libtcmu to create and register protocol handlers which help in exporting LUNs. The best part about qemu-tcmu is that it can export any format/protocol QEMU supports, for local or remote access: examples being gluster, rbd, nbd, sheepdog, nfs, qed, qcow, qcow2, vdi, vmdk, vhdx and a few others.


Our Approach

With all the background discussed above, let's now jump into the actual essence of this blog and see how we can expose a file in a gluster volume as a block device using qemu-tcmu.

  1. Start glusterd and tcmu-runner, create a gluster volume
  2. Create a file in the gluster volume
  3. Register and start the gluster protocol handler with tcmu using qemu-tcmu.
  4. Create the iSCSI target and export the LUN
  5. From the client side discover and login to the target portal, play with the block device


Setting Up

You need two nodes for this setup: one acts as the gluster node from which the iSCSI target is served, and the other machine acts as the iSCSI initiator/client where we play with the block device.

I'm using Fedora 24 on both nodes.


Gluster Setup

For the simplicity of this blog, I'm using a single-node gluster setup with a 1×1 plain distribute volume.

# dnf -y install git
# git clone https://github.com/gluster/glusterfs.git
# cd glusterfs
Check out a stable tag, as we noticed a critical bug in the latest master at the time of writing.
# git checkout -b tag-3.8.4 v3.8.4 
# dnf -y install gcc automake autoconf libtool flex \
         bison openssl-devel libxml2-devel         \
         python-devel libaio-devel sqlite-devel    \
         libibverbs-devel librdmacm-devel          \
         readline-devel lvm2-devel glib2-devel     \
         userspace-rcu-devel libcmocka-devel       \
         libacl-devel sqlite-devel redhat-rpm-config
# ./autogen.sh && ./configure && make -j install

# systemctl start glusterd

# gluster vol status
No volumes present

# gluster vol create block-store NODE1:/brick force
volume create: block-store: success: please start the volume ...

# gluster vol start block-store
volume start: block-store: success

# gluster vol status
Status of volume: block-store
Gluster process     TCP Port RDMA Port Online Pid
-----------------------------------------------------
Brick Node1:/brick  49152    0         Y      13372
 
Task Status of Volume block-store
-----------------------------------------------------
There are no active volume tasks 

Tcmu-Runner Setup

# git clone https://github.com/open-iscsi/tcmu-runner.git
# cd tcmu-runner
# dnf -y install targetcli cmake "*kmod*" libnl3* zlib-devel
For libgfapi.so* gluster libraries 
# export LD_LIBRARY_PATH=/usr/local/lib/
# cmake . -DSUPPORT_SYSTEMD=ON -DCMAKE_INSTALL_PREFIX=/usr
# make -j 
# make -j install

Run tcmu-runner
# systemctl start tcmu-runner
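
Optionally verify that tcmu-runner is up and has registered its user-backed handlers
(illustrative; the handler list depends on your build)
# systemctl status tcmu-runner
# targetcli ls | grep user:
| o- user:glfs ........................... [Storage Objects: 0]
| o- user:qcow ........................... [Storage Objects: 0]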

Qemu Setup

# git clone https://github.com/qemu/qemu.git
# cd qemu
copy and apply the qemu-tcmu RFC patch from 
https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg00711.html

# dnf -y install libiscsi-devel pixman-devel

For libgfapi.so* and libtcmu.so*
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64
# ./configure --target-list=x86_64-softmmu \
              --enable-glusterfs --enable-libiscsi \
              --enable-tcmu
# make -j 
# make -j install
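
As a quick sanity check (illustrative), the freshly built qemu-img should list gluster among its supported formats/protocols
# qemu-img --help | grep -o gluster | head -n1
gluster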

Target Setup

Create a file in the gluster volume 
# qemu-img create -f qcow2 gluster://NODE1/block-store/storage.qcow2 10G
Formatting 'gluster://NODE1/block-store/storage.qcow2', 
fmt=qcow2 size=10737418240 encryption=off 
cluster_size=65536 lazy_refcounts=off refcount_bits=16

Check for the details/info
# qemu-img info gluster://NODE1/block-store/storage.qcow2
image: gluster://NODE1/block-store/storage.qcow2
file format: qcow2
virtual size: 10G (10737418240 bytes)
disk size: 193K
cluster_size: 65536
Format specific information:
 compat: 1.1
 lazy refcounts: false
 refcount bits: 16
 corrupt: false

Register and start the gluster protocol handler
# qemu-tcmu gluster://NODE1/block-store/storage.qcow2 &
[scsi/tcmu.c:0298] tcmu start
[scsi/tcmu.c:0314] register

Should be able to see something like
# targetcli ls | grep user:qemu
| o- user:qemu ................... [Storage Objects: 0]

Define/set IQN
# IQN=iqn.2016-11.org.gluster:qemu-tcmu-glfs

Create a target
# targetcli /iscsi create $IQN
Created target iqn.2016-11.org.gluster:qemu-tcmu-glfs.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

Share a qemu-tcmu backed LUN without any auth checks
# targetcli /iscsi/$IQN/tpg1 set attribute \
                             generate_node_acls=1 \
                             demo_mode_write_protect=0
Parameter generate_node_acls is now '1'.
Parameter demo_mode_write_protect is now '0'.

Create the backend with qemu-tcmu storage module
# targetcli /backstores/user:qemu create QemuLUN 10G @drive
Created user-backed storage object QemuLUN size 10737418240.

Set/Export LUN
# targetcli /iscsi/$IQN/tpg1/luns create /backstores/user:qemu/QemuLUN
Created LUN 0.

Check the configuration
# targetcli ls
o-/ ...................................................... [...]
 o- backstores ........................................... [...]
 | o- block ............................... [Storage Objects: 0]
 | o- fileio .............................. [Storage Objects: 0]
 | o- pscsi ............................... [Storage Objects: 0]
 | o- ramdisk ............................. [Storage Objects: 0]
 | o- user:glfs ........................... [Storage Objects: 0]
 | o- user:qcow ........................... [Storage Objects: 0]
 | o- user:qemu ........................... [Storage Objects: 1]
 |   o- QemuLUN ................... [@drive (10.0GiB) activated]
 o- iscsi ......................................... [Targets: 1]
 | o- iqn.2016-11.org.gluster:qemu-tcmu-glfs ......... [TPGs: 1]
 |   o- tpg1 ............................... [gen-acls, no-auth]
 |     o- acls ....................................... [ACLs: 0]
 |     o- luns ....................................... [LUNs: 1]
 |     | o- lun0 ................................ [user/QemuLUN]
 |     o- portals ................................. [Portals: 1]
 |       o- 0.0.0.0:3260 .................................. [OK]
 o- loopback ...................................... [Targets: 0]
 o- vhost ......................................... [Targets: 0]
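
Optionally persist the targetcli configuration so it survives a reboot
(keep in mind the user:qemu backstore still needs its qemu-tcmu process running to come back up)
# targetcli saveconfig
Configuration saved to /etc/target/saveconfig.json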

Everything we have done till now was on the server side, i.e. Node1.

Initiator Setup

On the Client side (Node 2)

# dnf install iscsi-initiator-utils sg3_utils

Check existing block devices
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 477G 0 disk 
├─sda1 8:1 0 4.7G 0 part /boot
└─sda2 8:2 0 472.3G 0 part 
 ├─fedora-root 253:0 0 328.8G 0 lvm /
 ├─fedora-swap 253:1 0 3.8G 0 lvm [SWAP]
 └─fedora-home 253:2 0 139.7G 0 lvm /home

# systemctl start iscsid 

Discovery and login to target
# iscsiadm -m discovery -t st -p NODE1 -l 
NODE1:3260,1 iqn.2016-11.org.gluster:qemu-tcmu-glfs
Logging in to [iface: default, target: ..., portal: NODE1,3260] (multiple)
Login to [iface: default, target: ..., portal: NODE1,3260] successful.

Troubleshooting tip!
If you see something like
# iscsiadm -m discovery -t st -p NODE1 -l
iscsiadm: cannot make connection to NODE1: No route to host
iscsiadm: cannot make connection to NODE1: No route to host
[...]
iscsiadm: connection login retries (reopen_max) 5 exceeded
iscsiadm: Could not perform SendTargets discovery: connection failure
Then flush the iptables rules with the "iptables -F" command on the server node
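
A less drastic alternative (assuming firewalld on Fedora) is to open just the iSCSI port on the server node
# firewall-cmd --add-port=3260/tcp
# firewall-cmd --permanent --add-port=3260/tcp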

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 477G 0 disk 
├─sda1 8:1 0 4.7G 0 part /boot
└─sda2 8:2 0 472.3G 0 part 
 ├─fedora-root 253:0 0 328.8G 0 lvm /
 ├─fedora-swap 253:1 0 3.8G 0 lvm [SWAP]
 └─fedora-home 253:2 0 139.7G 0 lvm /home
sdb 8:16 0 10G 0 disk 

Boom! got sdb with 10G space 
Check that sdb is the right one:
you should be able to see 'vendor specific' as 'qemu/@drive'
# sg_inq -i /dev/sdb 
VPD INQUIRY: Device Identification page 
 Designation descriptor number 1, descriptor length: 49 
 designator_type: T10 vendor identification, code_set: ASCII 
 associated with the addressed logical unit 
 vendor id: LIO-ORG 
 vendor specific: fe14a7d8-ca4d-4fa0-9646-cceb4961fd92 
 Designation descriptor number 2, descriptor length: 20 
 designator_type: NAA, code_set: Binary 
 associated with the addressed logical unit 
 NAA 6, IEEE Company_id: 0x1405 
 Vendor Specific Identifier: 0xfe14a7d8c 
 Vendor Specific Identifier Extension: 0xa4d4fa09646cceb4 
 [0x6001405fe14a7d8ca4d4fa09646cceb4] 
 Designation descriptor number 3, descriptor length: 16 
 designator_type: vendor specific [0x0], code_set: ASCII 
 associated with the addressed logical unit 
 vendor specific: qemu/@drive 

Let's format the block device with XFS
# mkfs.xfs /dev/sdb
meta-data=/dev/sdb      isize=512   agcount=4, agsize=655360 blks
         =              sectsz=512  attr=2, projid32bit=1
         =              crc=1       finobt=1, sparse=0
data     =              bsize=4096  blocks=2621440, imaxpct=25
         =              sunit=0     swidth=0 blks
naming   =version 2     bsize=4096  ascii-ci=0 ftype=1
log      =internal log  bsize=4096  blocks=2560, version=2
         =              sectsz=512  sunit=0 blks, lazy-count=1
realtime =none          extsz=4096  blocks=0, rtextents=0

# mount /dev/sdb /mnt

# df -Th
Filesystem Type Size Used Avail Use% Mounted on
[...]
/dev/sdb xfs 10G 33M 10G 1% /mnt
[...]

# cd /mnt

# touch like.{1..10}
# ls
like.1 like.2 like.4 like.6 like.8
like.10 like.3 like.5 like.7 like.9

Wow! This is cool, isn't it?
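
When you are done playing, you can unmount the filesystem and log out of the target from the client (a quick sketch; the IQN and portal match what we set up earlier)
# umount /mnt
# iscsiadm -m node -T iqn.2016-11.org.gluster:qemu-tcmu-glfs -p NODE1 --logout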

Qemu-tcmu is fresh cake, still a proof of concept, with the following work ongoing:

1. For now we have one qemu-tcmu process running per target, i.e. for 10 targets we would run 10 "qemu-tcmu … &" processes. This means more resource consumption, but in return gives better isolation. Don't worry though, Fam is working on this; he will shrink these down to one single process.

Hopefully we should be able to get something like

# qemu-tcmu -drive gluster://..../file1 \
            -drive gluster://..../file2 \
            -drive gluster://..../file3 \
            ...

so you can dynamically add the target just like any other extra disk in a VM.

2. How do you create a second target?
Well, we have "-x, --handler-name=NAME" for that 🙂

Example:

# qemu-tcmu -x qemu2  gluster://NODE1/block-store/storage2.qcow2 &
so now we have 2 handlers, one for each target
# targetcli ls | grep user:qemu
| o- user:qemu ........................... [Storage Objects: 1]
| o- user:qemu2 .......................... [Storage Objects: 0]

Though @drive is very generic, this is the identifier qemu-tcmu understands for the drive for now.

3. Work related to snapshots, backup, mirroring, etc. is in progress and can be expected in near-future releases.

Conclusion

With this approach of exporting a file in a gluster volume as an iSCSI target, we get easy and free snapshots for block storage. We expect more features/improvements landing in qemu-tcmu very soon, such as multiple targets within the same process, multiple objects within the same target via unique @drive IDs, and improvements in areas like snapshots/mirroring/backup.

I shall keep you updated with the latest improvements in qemu-tcmu.

Similar Topics

https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/

https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/

https://pkalever.wordpress.com/2016/08/16/read-write-once-persistent-storage-for-openshift-origin-using-gluster/