
The personal blog of Sam McLeod

Mirroring a Gitlab project to Github

Let’s pretend you have a project on Gitlab called ask-izzy and you want to mirror it up to Github at https://github.com/ask-izzy/ask-izzy. Assuming you’re running Gitlab as the default git user and that your repositories are stored in /mnt/repositories, you can follow something similar to the instructions below.

Grant write access on Github. First, get your Gitlab install’s pubkey from the git user:

    cat /home/git/.ssh/id_rsa.pub

On Github, add this pubkey as a deploy key on the repo, making sure you tick the option to allow write access.

Then add a post-receive hook to the Gitlab project:

    mkdir /mnt/repositories/developers/ask-izzy.git/custom_hooks/
    echo "exec git push --quiet github &" > \
    /mnt/repositories/developers/ask-izzy.
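The excerpt above is cut off, so here is a minimal sketch of the complete hook setup. It assumes the bare repo lives at /mnt/repositories/developers/ask-izzy.git, that the hook file is custom_hooks/post-receive (Gitlab’s custom_hooks convention), and that a remote named "github" points at the SSH form of the URL above:

    # add the Github mirror as a remote on the bare repo
    cd /mnt/repositories/developers/ask-izzy.git
    git remote add github git@github.com:ask-izzy/ask-izzy.git

    # create the hook, then make it executable and owned by the git user
    mkdir -p custom_hooks
    echo 'exec git push --quiet github &' > custom_hooks/post-receive
    chmod +x custom_hooks/post-receive
    chown -R git:git custom_hooks

After this, every push to the Gitlab project should trigger a background push of the same refs up to Github.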

AskIzzy

Today we launched a mobile website for homeless people … and it was launched by one of Australia’s many recent Prime Ministers. Today alone we served up over 87,000 requests. As many of you know, I work with Infoxchange as the operations lead. When I first heard the idea of a website or app for people who are homeless, or worried about becoming homeless, in Australia, I really didn’t think it made sense - until I saw the stats showing how many homeless people in Australia have regular access to a smartphone and data, either via a cellular provider or free WiFi.

Fix XenServer SR with corrupt or invalid metadata

If a disk / VDI is orphaned or only partially deleted you’ll notice that under the SR it’s not assigned to any VM. This can cause issues that look like metadata corruption, resulting in the inability to migrate VMs or edit storage. For example:

    # xe vdi-destroy uuid=6c2cd848-ac0e-441c-9cd6-9865fca7fe8b
    Error code: SR_BACKEND_FAILURE_181
    Error parameters: , Error in Metadata volume operation for SR. [opterr=VDI delete operation failed for parameters: /dev/VG_XenStorage-3ae1df17-06ee-7202-eb92-72c266134e16/MGT, 6c2cd848-ac0e-441c-9cd6-9865fca7fe8b. Error: Failed to write file with params [3, 0, 512, 512]. Error: 5],

Removing stale VDIs

To fix this, you need to remove those VDIs from the SR after first deleting the logical volume:
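A minimal sketch of that fix, using the VDI UUID and volume group from the error above. On LVM-based SRs the backing LV name typically follows the VHD-<vdi-uuid> convention (check with lvs first); xe vdi-forget then drops the VDI record from the XenServer database without touching storage:

    # confirm the stale LV exists, then delete it
    lvs | grep 6c2cd848
    lvremove /dev/VG_XenStorage-3ae1df17-06ee-7202-eb92-72c266134e16/VHD-6c2cd848-ac0e-441c-9cd6-9865fca7fe8b

    # remove the VDI record from the XenServer database
    xe vdi-forget uuid=6c2cd848-ac0e-441c-9cd6-9865fca7fe8b

    # rescan the SR so its metadata is refreshed
    xe sr-scan uuid=<sr-uuid>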

iSCSI SCSI-ID / Serial Persistence

“Having a SCSI ID is a f*cking idiotic thing to do.” - Linus Torvalds

…and after the amount of time I’ve wasted getting XenServer to play nicely with LIO iSCSI failover, I tend to agree.

The Problem

One oddity of Xen / XenServer’s storage subsystem is that it identifies iSCSI storage repositories via a calculated SCSI ID rather than the iSCSI serial - which would be the sane thing to do. Citrix’s less-than-ideal take on dealing with SCSI ID changes is for you to take your VMs offline, disconnect the storage repositories, recreate them, then go through all your VMs and re-attach their orphaned disks, hoping that you remembered to add some sort of hint as to which VM they belong to, then finally wipe the sweat and tears from your face.
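A minimal sketch of one way to keep the calculated ID stable across LIO failover, assuming an iblock backstore (the hba/device names iblock_0/datavol are hypothetical). LIO derives the identifiers it reports in the SCSI VPD pages from the backstore’s vpd_unit_serial attribute, so pinning the same value on both failover nodes keeps the SCSI ID XenServer computes from changing:

    # on BOTH cluster nodes, pin the unit serial for the backstore
    echo "my-persistent-serial-0001" \
      > /sys/kernel/config/target/core/iblock_0/datavol/wwn/vpd_unit_serial

    # verify the value took
    cat /sys/kernel/config/target/core/iblock_0/datavol/wwn/vpd_unit_serial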

Replacing Junos Pulse with OpenConnect

In an attempt to avoid using the Juniper Pulse (now Pulse Secure) VPN client we tried OpenConnect, but found that DNS did not work correctly when connected to the VPN. This bug has recently been resolved but has not made its way into a new build; in fact there have been no releases for 6 months. Luckily, OpenConnect was not too difficult to build from source.

Build OpenConnect on OSX

Remove old openconnect and install deps:

    brew remove openconnect
    brew install libxml2 lzlib openssl libtool libevent

Build openconnect:

    wget git.infradead.org/users/dwmw2/openconnect.git/snapshot/0f1ec30d17aa674142552e275bf3fac30d891b39.tar.gz
    tar zxvf 0f1ec30d17aa674142552e275bf3fac30d891b39.tar.gz
    cd openconnect-0f1ec30
    LIBTOOLIZE=glibtoolize ./autogen.sh
    PATH=/usr/local/opt/gettext/bin:$PATH ./configure
    make
    make install

To connect:

    sudo openconnect --juniper -u myusername www.
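The connect command above is cut off; a complete invocation looks like the following, with vpn.example.com standing in as a hypothetical gateway address:

    # --juniper selects the Juniper Network Connect / Pulse protocol
    sudo openconnect --juniper -u myusername vpn.example.com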

SSD Storage - Two Months In Production

Over the last two months I’ve been running selected IO-intensive servers off the SSD storage cluster. These hosts include (among others) our primary Puppetmaster, Gitlab server, Redmine app and database servers, Nagios servers, and several Docker database host servers.

Reliability

We haven’t had any software or hardware failures since commissioning the storage units. During this time we have had 3 disk failures on our HP StoreVirtual SANs that required us to call the supporting vendor and replace the failed disks. We have performed a great deal of live cluster failovers without any noticeable interruption to services and with no unexpected results.

OS X Software Update Channels For Betas

Set update channel to receive developer beta updates:

    sudo softwareupdate --set-catalog https://swscan.apple.com/content/catalogs/others/index-10.11seed-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog.gz

Set update channel to receive public beta updates:

    sudo softwareupdate --set-catalog https://swscan.apple.com/content/catalogs/others/index-10.11beta-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog.gz

List available updates:

    sudo softwareupdate --list

Set update channel to receive default, stable updates:

    sudo softwareupdate --clear-catalog

Show current settings:

    defaults read /Library/Preferences/com.apple.SoftwareUpdate.plist

Write setting manually:

    defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL https://swscan.apple.com/content/catalogs/others/index-10.11beta-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog.gz

iSCSI Benchmarking

The following are benchmarks from our testing of our iSCSI SSD storage.

67,300 read IOP/s on a VM on iSCSI (Disk -> LVM -> MDADM -> DRBD -> iSCSI target -> Network -> XenServer iSCSI Client -> VM), per VM, scaling to 1,000,000 IOP/s total:

    # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=128 --size=2G --readwrite=read
    test: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=128
    fio 2.0.8
    Starting 1 process
    Jobs: 1 (f=1): [R] [55.6% done] [262.1M/0K /s] [67.3K/0 iops] [eta 00m:04s]

38,500 random 4k write IOP/s on a VM on iSCSI (Disk -> LVM -> MDADM -> DRBD -> iSCSI target -> Network -> XenServer iSCSI Client -> VM), per VM, scaling to 700,000 IOP/s total:

    # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=128 --size=2G --readwrite=randwrite
    test: (g=0): rw=randwrite, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=128
    fio 2.

Delayed Serial STONITH

A modified version of John Sutton’s rcd_serial cable, coupled with our Supermicro reset switch hijacker. This works with the rcd_serial fence agent plugin.

Reasons rcd_serial makes for a very good STONITH mechanism:

It has no dependency on power state.
It has no dependency on network state.
It has no dependency on node operational state.
It has no dependency on external hardware.
It costs less than $5 plus the time to build.
It is incredibly simple and reliable.

Probably the most common type of STONITH agent in use is one that controls a UPS / PDU; while this sounds like a good idea in theory, there are a number of issues with relying on a UPS / PDU:
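A minimal sketch of wiring this up in Pacemaker, assuming the cluster-glue rcd_serial STONITH plugin; the node name, serial device, signal line, and pulse duration below are hypothetical, and parameter names vary between plugin versions, so list them first:

    # show the plugin's required parameters (names vary between versions)
    stonith -t rcd_serial -n

    # a fencing resource for node1, pulsing DTR on /dev/ttyS0 for 1000ms
    crm configure primitive st-node1 stonith:rcd_serial \
      params hostlist="node1" ttydev="/dev/ttyS0" dtr_rts="dtr" msduration="1000"

    # never let the resource run on the node it is meant to fence
    crm configure location l-st-node1 st-node1 -inf: node1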