Zdo13 9701 01
Version 1.0
June 2013
Table of Contents
About this document
Overview
So how was it built?
Install & Configure GlusterFS
    Step 01 : Install Fedora on XFVA001
    Step 02 : Clone the System
    Step 03 : Prepare xfva001
    Step 04 : Prepare xfva002
    Step 05 : Install GlusterFS
    Step 06 : Setup Trusted Storage Pool
    Step 07 : Create the Gluster Volume
    Step 08 : Test it !
Setup the ownCloud server
    Step 01 : Install basic OS
    Step 02 : Install GlusterFS Client
    Step 03 : Create Mountpoint & edit fstab
    Step 04 : Install OwnCloud
    Step 05 : Test it!
Overview
We used two servers (xfva001 and xfva002) which both run Fedora 18 and have the glusterfs-fuse and glusterfs-server packages installed (version 3.1.1). During tests the glusterfs-client package was needed too. On one of these servers (we'll call it the primary server) a 'replica 2' volume was built. This volume (gv0) consists of two bricks, each of them 'physically attached' to one of these servers. The gluster volume thus created will have its data stored on both of these bricks. In case of failure of one of the bricks or servers (or any other hardware in between) data will remain available to the client (the ownCloud server in our example) via the other brick on the other server. All of this is done transparently to the client. The client will expose its services (the ownCloud application) via a reverse proxy (pound) so end-users can make use of this service via 'the interwebs'.
Up next, boot the clone and change its hostname to xfva002. Make sure both hostnames are resolvable in DNS. See the network requirements.
Create MountPoint
[root@xfva001 henri]# mkdir -p /export/bfva00101
[root@xfva001 henri]# echo "/dev/vdb1 /export/bfva00101 xfs defaults 1 2" >> /etc/fstab
The 'naming convention' used here is that a 'brick' will be stored on a disk. This disk will be mounted on /export/bxxxxxxnn, where:

xxxxxx : 6-character hostname identifier (in this case 'fva001')
nn     : disk number on the host (in this case the first disk, '01')
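As a small sketch of this convention (the variable names here are my own for illustration, not from the original notes):

```shell
# Sketch of the /export/bxxxxxxnn brick-mountpoint naming convention.
# host_id and disk_no are illustrative names, not part of the setup itself.
host_id="fva001"   # xxxxxx : 6-character hostname identifier
disk_no="01"       # nn     : disk number on the host
brick_mount="/export/b${host_id}${disk_no}"
echo "${brick_mount}"   # /export/bfva00101, matching the mkdir above
```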
Create subdir
mkdir /export/bfva00101/vdb1
Then set up the peer connection between xfva001 and xfva002. From the xfva001 machine run the following command:
# gluster peer probe xfva002
Probe successful
# gluster peer status
Number of Peers: 1

Hostname: xfva002
Uuid: 6de2d3db-c770-4b0a-a90d-2c69fa8fc6f7
State: Peer in Cluster (Connected)
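The volume-create command itself did not survive into these notes; a sketch of what it would look like, assuming a 'replica 2' volume named gv0 built from the brick subdirectories prepared earlier:

```shell
# Sketch only: create a two-way replicated volume from the two bricks
# (brick paths assumed from the 'Prepare' steps above)
gluster volume create gv0 replica 2 \
    xfva001:/export/bfva00101/vdb1 \
    xfva002:/export/bfva00201/vdb1
gluster volume start gv0
gluster volume info gv0
```

These are standard gluster 3.x CLI subcommands, but they need a live glusterd on both peers to run.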
This will create a volume that will have all of its data replicated between xfva001 and xfva002. Data stored on the 'gv0' gluster volume will be written to both 'Brick1' and 'Brick2' as shown in the picture below (courtesy of https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/htmlsingle/Administration_Guide).
Step 08 : Test it !
After all the hard work, it would be nice to see it actually working. Seeing as we installed the 'glusterfs-client' package at the start, we can simply create a mountpoint and mount the gluster volume there:
mkdir /mnt/tstgv0
mount -t glusterfs xfva001:/gv0 /mnt/tstgv0/

When all goes well, the 'df' command should show output resembling the below:

[root@xfva002 ~]# df
Filesystem              1K-blocks    Used Available Use% Mounted on
devtmpfs                   491440       0    491440   0% /dev
tmpfs                      509716      84    509632   1% /dev/shm
tmpfs                      509716    3984    505732   1% /run
tmpfs                      509716       0    509716   0% /sys/fs/cgroup
/dev/mapper/fedora-root   5585732 4076452   1218876  77% /
tmpfs                      509716      12    509704   1% /tmp
/dev/vda1                  487652  103479    358573  23% /boot
/dev/vdb1                16765952   33008  16732944   1% /export/bfva00201
xfva001:/gv0             16765952   33024  16732928   1% /mnt/tstgv0
Start writing some files (let's copy /var/log/messages 100 times, as in the GlusterFS Quickstart example):
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/tstgv0/copy-test-$i; done
[Directory listings of the brick on each server, both showing the same files: . .. copy-test-001 copy-test-002 ...]
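The copy loop can also be sanity-checked locally; this dry run against a temporary directory instead of the gluster mount (the temp-dir approach is mine, not from the original notes) shows it produces exactly 100 files:

```shell
# Dry run of the same loop against a temp dir instead of /mnt/tstgv0
tmp=$(mktemp -d)
printf 'dummy log line\n' > "$tmp/messages"
for i in $(seq -w 1 100); do cp -rp "$tmp/messages" "$tmp/copy-test-$i"; done
ls "$tmp"/copy-test-* | wc -l    # counts the 100 copies
rm -rf "$tmp"
```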
According to the GlusterFS documentation the '_netdev' option is essential to make sure the volume is properly mounted after a server reboot. However, on Ubuntu this option seems to be unimplemented and thus ignored. See also this post at unix-heaven.org: http://unix-heaven.org/glusterfs-fails-to-mount-after-reboot
To be honest, keeping a 'healthy' mix of various distros was the main reason for this :) At this point in time it was not deemed necessary to 'fix' this. Just make sure to manually run 'mount -a' after the ownCloud server reboots :)
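For reference, the client-side fstab entry on the ownCloud server would look something like the sketch below (the mountpoint /mnt/gv0 matches the datastore used later; field values other than that are my assumption):

```
# /etc/fstab on the ownCloud server (sketch)
xfva001:/gv0  /mnt/gv0  glusterfs  defaults,_netdev  0 0
```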
Make sure to set the ownership of the intended ownCloud datastore (in our case /mnt/gv0) in such a way that the web-server processes can do what they need to do. I've settled for a plain and simple:
chown www-data:www-data /mnt/gv0
Then head over to 'http://owncloud/owncloud' and you will be presented with a post-install ownCloud configuration screen. Complete the form and you're up and running.
However: whenever I set the reverse proxy (pound) to expose ownCloud to the internet, it will NOT CONNECT via the client (web access is OK). WebDAV errors such as 301 appear. The pound configuration used (port 80 traffic on the 'pfSense' external interface is forwarded to the 'pound' box):
ListenHTTP
    Address 192.168.1.88
    Port 80
    Service
        HeadRequire "Host: .*owncloud.zdevops.com*"
        BackEnd
            Address 192.168.1.42
            Port 80
        End
    End
End
Luckily we tested 'local access' and determined everything was OK. Simple deduction tells us the cause of this error lies within the reverse proxy (as this is the added component). For the 'pound' reverse proxy an extra setting is needed to allow the extra HTTP requests to be accepted. ('man pound' is actually very educational reading material!)
xHTTP value
    Defines which HTTP verbs are accepted. The possible values are:
    0    (default) accept only standard HTTP requests (GET, POST, HEAD).
    1    additionally allow extended HTTP requests (PUT, DELETE).
    2    additionally allow standard WebDAV verbs (LOCK, UNLOCK, PROPFIND, PROPPATCH, SEARCH, MKCOL, MOVE, COPY, OPTIONS, TRACE, MKACTIVITY, CHECKOUT, MERGE, REPORT).
    3    additionally allow MS extensions WebDAV verbs (SUBSCRIBE, UNSUBSCRIBE, NOTIFY, BPROPFIND, BPROPPATCH, POLL, BMOVE, BCOPY, BDELETE, CONNECT).
    4    additionally allow MS RPC extensions verbs (RPC_IN_DATA, RPC_OUT_DATA).
Seeing as we had nothing configured in pound.cfg, the default was being used. After setting 'xHTTP 2' everything worked as intended! Below is the altered pound.cfg:
ListenHTTP
    Address 192.168.1.88
    Port 80
    xHTTP 2
    Service
        HeadRequire "Host: .*owncloud.zdevops.com*"
        BackEnd
            Address 192.168.1.42
            Port 80
        End
    End
End
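A quick way to confirm WebDAV verbs now pass through the proxy is to fire a PROPFIND at pound by hand (a hypothetical check, not from the original notes; the WebDAV path is ownCloud's usual endpoint and may differ in your install):

```shell
# Send a WebDAV PROPFIND through the proxy. Before the xHTTP 2 change
# pound rejects this outright; afterwards ownCloud answers it
# (typically with 401 until credentials are supplied).
curl -i -X PROPFIND -H "Host: owncloud.zdevops.com" \
    http://192.168.1.88/owncloud/remote.php/webdav/
```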
Next Steps...
Sync-Client on iOS/iPhone
First of all, I installed/bought the ownCloud client app for the iPhone. It works like a charm, and it even has the option to sync address book and calendar entries. To do so, use the guide provided on the ownCloud website (http://doc.owncloud.org/server/4.5/user_manual/sync_ios.html); it's pretty straightforward, as you will see.
Geo-Replication
I'd really like to set up 'geo-replication' too. For this, I will need to implement/connect to some GlusterFS stuff at a remote site. This too is something 'for later'.
Glusterception
The one thing I am mentally designing is a 'glusterception' setup. By this I mean a setup where the 'brick disks' are in fact glusterfs volumes... from another Gluster cluster. It might not be useful, but it will be funny... It looks something like the following:
Closing Notes....
Thanks for reading through all of this; it's my first 'open sourced' documentation, so any remarks, tips, tricks or comments are more than welcome. As stated, this document is based on installation notes, which consist of 'copy-pasted' terminal output, screengrabs and hand-scribbled notes :) It should however get you up and running with your own 'GlusterCloud'. In the odd case you are really implementing a GlusterCloud based on this MINI-HOWTO, I am really curious about your opinions and results :) Thanks to @tomsan68 for bringing my attention to the GlusterFS 'product'. A big shout out to the wonderful people who make GlusterFS possible, and a big salute to the creators of ownCloud for such a marvelous product! That's me done... see you next time?