The personal blog of Sam McLeod
In vim, you can add a comment at the top of files to set the syntax, e.g.:
# vim: syntax=ruby

In SublimeText there are many ways to detect syntax; one interesting approach I’ve recently found useful is to match on the top line of the file. For example, Puppet uses a file called Puppetfile - it has no extension but it’s really Ruby syntax, so it’s useful to add linting in case you miss something simple like a stray ',' and break deployments.
I use a plugin called ApplySyntax, which makes it easy to apply syntax options to files. I believe you can do this in the language's syntax definition without the plugin, but YMMV:
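As a rough sketch of what such a rule can look like - the exact key names vary between ApplySyntax versions, so treat the settings below as an assumption and check the plugin's README:

// User/ApplySyntax.sublime-settings (hypothetical example)
{
    "syntaxes": [
        {
            "syntax": "Ruby/Ruby",
            "rules": [
                // match Puppetfiles by name, or any file whose first
                // line carries the vim modeline shown above
                {"file_path": ".*Puppetfile$"},
                {"first_line": "^#\\s*vim:\\s*syntax=ruby"}
            ]
        }
    ]
}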
Earlier this week we started the process to upgrade one of our hypervisor compute clusters when we encountered a rather painful bug with HP’s Broadcom NIC chipsets.
We were part way through a routine rolling pool upgrade of our hypervisor (XenServer) cluster when we observed unexpected and intermittent loss of connectivity between several VMs, then entire XenServer hosts.
The problems appeared to impact hosts that hadn’t yet been upgraded to XenServer 7.2. We now attribute this to extreme packet loss between the hosts in the pool, caused by buggy firmware from Broadcom and HP.
We were aware of the recently published issues with Broadcom/HP NICs used in VMware clusters where NICs would be bricked by a firmware upgrade.
We’re in the process of shifting from our custom ‘glue’ for orchestrating Docker deployments to Kubernetes. When we first deployed Docker to replace LXC and our legacy, Puppet-heavy application configuration and deployment systems, there really wasn’t any existing tool to manage this, so we rolled our own: mainly a few Ruby scripts combined with a Puppet / Hiera / MCollective driven workflow.
The main objective is to replace our legacy NFS file servers used to host uploads / attachments and static files for our web applications. While NFS(v4) performance is adequate, it is a clear single point of failure, and of course there are the age-old stale mount problems should network interruptions occur.
Of all the tools for reading news and subscribing to software releases, I still find RSS the most useful.
I use Feedly to manage my RSS subscriptions and keep all my devices in sync, but instead of using Feedly’s own client, I use an app called Reeder as the client / reader itself.
Link: My Feedly RSS Feed
Features:
- Keyword alerts.
- Browser plugins to subscribe to the current URL.
- Notation and highlighting support (a bit like Evernote).
- Search and filtering across large numbers of feeds / content.
- IFTTT, Zapier, Buffer and Hootsuite integration.
- Built-in save / share functionality (that I only use when I’m on the website).
MH-Z19 CO2 sensor reader, logger and visualiser

Reads data from a UART (serial) connected MH-Z19 (or MH-Z14) sensor using Python 3. If you dare to install Node.js you can visualise the logged data (using HTML and the plotly.js library).

Repository: sammcj/CO2-Logger
Usage

Note: this post is from 2016; in 2021 I replaced my custom CO2 loggers with an Aranet4. While very expensive, it is an excellent off-the-shelf solution with many features.
Connection

The sensor can be queried using 3.3V UART at 9600 bps. The sensor's main feed voltage is 5V.
It can be connected to a computer using almost any USB-UART converter if the voltage matches.
Requirements

- Python3 (pip3 install serial pyserial)
- A USB -> Serial UART connector (could also be used with a Raspberry Pi, ESP8266, Arduino etc…)
- MH-Z19 CO2 sensor (around $25 AUD on eBay)

Querying

$ python3 CO2Reader.
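The query itself is just a fixed 9-byte command over the serial line; here is a minimal sketch of the idea using pyserial (the port name is an assumption - see CO2Reader in the repository for the real implementation):

import serial

# Standard MH-Z19 'read CO2' command: 0xFF 0x01 0x86 ... with checksum 0x79
READ_CO2 = b"\xff\x01\x86\x00\x00\x00\x00\x00\x79"

# Port name is an assumption; adjust for your USB-UART converter
with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
    port.write(READ_CO2)
    reply = port.read(9)             # the sensor answers with 9 bytes
    ppm = reply[2] * 256 + reply[3]  # bytes 2 and 3 hold the CO2 reading
    print("CO2: {} ppm".format(ppm))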
I wanted to try Android for a couple of weeks; I like staying on top of technology and gadgets, and making sure I never become a blind ‘zealot’ for any platform or brand.
The OnePlus 3

I did a lot of research and decided to try the “OnePlus 3” as it was good bang-for-buck, ran the latest software and had plenty of grunt with the latest 8-core, high clock speed Qualcomm processor coupled with 6GB of DDR4 - the specs really are very impressive, especially for a $400 USD phone.
Hardware-wise the unit is lovely; not as high build quality as my iPhone 6s+, but not nearly as bad as the Samsung Galaxy S3 or other Samsung devices I had tried in the past.
Note: this is a follow-up post to 2015-07-21-rcd-stonith
A Linux Cluster Base STONITH provider for use with modern Pacemaker clusters

This has since been accepted and merged into Fedora’s code base and as such will make its way to RHEL.
Source Code: Github
Diptrace CAD Design: Github

I have open sourced the CAD circuit design and made this available within this repo under CAD Design and Schematics.

Related RedHat Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1240868

v1 vs v2/v3 versions of the rcd_serial STONITH system

The v2/v3 cables include the following improvements:
- A connector on the outside of the server (the female side runs from the reset pin ‘hijacker’) so that cables can be easily disconnected.
Ever forgotten to add a critical service to monitoring?
Want to know if a service or process fails without explicitly monitoring every service on a host?
…Then why not use SystemD’s existing knowledge of all the enabled services? Thanks to ‘Kbyte’ who made a simple Nagios plugin to do just this!
Requirements

- Python3 (for RHEL/CentOS 7: yum install python34)
- python-nagiosplugin (my pre-built RPMs, or pip3 install nagiosplugin)
- PyNagSystemD
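Under the hood the idea is simply to ask systemd which units have failed; you can see the same data the plugin reports on with systemctl itself:

$ systemctl list-units --state=failed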
Scripts and source available here: sql_ascii_to_utf8
The GoalTo be able to take a Postgres Database which is in SQL_ASCII encoding, and import it into a UTF8 encoded database.
The Problem

PostgreSQL will generate errors like this if it encounters any non-UTF8 byte sequences during a database restore:
# pg_dump -Fc test_badchar | pg_restore -d test_badchar_utf8
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2839; 0 26852 TABLE DATA table101 postgres
pg_restore: [archiver (db)] COPY failed for table "table101": ERROR: invalid byte sequence for encoding "UTF8": 0x91
CONTEXT: COPY table101, line 1
WARNING: errors ignored on restore: 1

And the corresponding data will be omitted from the database (in this case, the whole table, even the rows which did not have a problem):
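One common workaround (not necessarily the approach the linked scripts take, and the source encoding here is an assumption) is to dump to plain SQL, transcode the dump with iconv, and restore that - e.g. treating the stray bytes as WINDOWS-1252:

$ pg_dump test_badchar | iconv -f WINDOWS-1252 -t UTF-8 | psql -d test_badchar_utf8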
The most common way to use rsync is probably as such:
rsync -avr user@<source>:<source_dir> <dest_dir>

This results in 30-35MB/s depending on file sizes.
This can be improved by using a more efficient, less secure encryption algorithm, disabling compression and telling the SSH client to disable some unneeded features that slow things down.
With the settings below I have achieved 100MB/s (at work between VMs) and over 300MB/s at home between SSD drives.
rsync -arv --numeric-ids --progress -e "ssh -T -c [email protected] -o Compression=no -x" user@<source>:<source_dir> <dest_dir>

If you want to delete files at the DST that have been deleted at the SRC (obviously use with caution):
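rsync's standard flag for this is --delete, so presumably something along these lines:

rsync -arv --numeric-ids --progress --delete -e "ssh -T -c [email protected] -o Compression=no -x" user@<source>:<source_dir> <dest_dir>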
This is a quick tl;dr - there are many other situations and options you could consider; see the FIO man page.

IOP/s = Input or Output operations per second
Throughput = How many MB/s can you read/write continuously

Variables worth tuning based on your situation

--iodepth

The iodepth is very dependent on your hardware.
Rotational drives without much cache and with high latency (i.e. desktop SATA drives) will not benefit from a large iodepth; values between 16 and 64 could be sensible.
High speed, lower latency SSDs (especially NVMe devices) can utilise a much higher iodepth; values between 256 and 4096 could be sensible.
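As a worked example, a 4k random-read test along these lines would exercise a higher iodepth (the file name, size and runtime are placeholders - adjust for your hardware):

$ fio --name=randread-test --filename=/tmp/fio.test --size=1G --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=256 --runtime=60 --time_based --group_reporting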