Thursday, October 03, 2013

Install GRUB onto multiple boot disks in Software RAID-1 (quick reference)

Here is an example where I have a 3-way RAID-1 array. The /boot partition is stored at /dev/md0. This installs GRUB to each disk, so that if one disk fails, you can boot off one of the other disks.

# grub
grub> find /grub/stage1
 (hd0,0)
 (hd1,0)
 (hd2,0)
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

With this, you should now be able to boot from any of the disks in the RAID-1 array, no matter what boot order you set in the BIOS.

For safety, I suggest using UUIDs in your /etc/fstab file for your /boot and / (root) partitions. This way the machine will mount those file systems by UUID, even if mdadm (software RAID) decides to renumber your /dev/md# devices. Note: This is the default behavior in RHEL 6 / CentOS 6.
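
As a rough sketch, the relevant /etc/fstab entries would look something like this (the UUIDs here are placeholders; use "blkid /dev/md0" to find your real ones):

UUID=2f1d5a0e-xxxx-xxxx-xxxx-000000000001   /boot   ext4    defaults        1 2
UUID=2f1d5a0e-xxxx-xxxx-xxxx-000000000002   /       ext4    defaults        1 1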

Monday, September 09, 2013

View and changing the SSH HostKey files

With all of the NSA leaks in the past few months, I figured it was a good time to go look at the SSH keys that we use on the servers and decide whether we want to re-key things. Naturally, this is a bit of a PITA because you'll have to let all clients know that the SSH host key changed and users will have to edit their ~/.ssh/known_hosts file.

First off, let's look at the current key information (using the "-l" option to display the fingerprint, and the "-f filename" option to look at an existing file):

# /usr/bin/ssh-keygen -l -f /etc/ssh/ssh_host_dsa_key
1024 86:72:0c:d8:47:ce:c4:4a:79:25:9b:ad:22:1b:de:87 /etc/ssh/ssh_host_dsa_key.pub (DSA)
# /usr/bin/ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key 
3072 2b:3d:27:77:49:bf:05:09:ee:b7:74:68:e8:f3:fc:3f /etc/ssh/ssh_host_rsa_key.pub (RSA)

This displays a few useful pieces of information:

#1 - The key size is 1024 bits for the DSA key. All DSA keys are 1024 bits in size due to FIPS 186-2 (Federal Information Processing Standard 186-2). While the newer FIPS 186-3 and FIPS 186-4 standards allow larger DSA keys, I'm not sure how well supported they are in OpenSSH.

My RSA key is 3072 bits in size instead of the default 2048 bits in CentOS 6. Older releases had a default of only RSA/1024 bits, which is considered to be a bit weak today. The current recommended minimum is 2048 bits and the maximum in common use is 4096 bits.

A good read is Anatomy of a change - Google announces it will double its SSL key sizes.

#2 - The key fingerprint, which should be communicated to your users via out-of-band communications.

To re-key, I suggest using the following for DSA keys:

# /usr/bin/ssh-keygen -N '' -C 'servername SSH host key Sep 2013' -t dsa -f /etc/ssh/ssh_host_dsa_key
Generating public/private dsa key pair.
/etc/ssh/ssh_host_dsa_key already exists.
Overwrite (y/n)? y
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
86:72:0c:d8:47:ce:c4:4a:79:25:9b:ad:22:1b:de:87 servername SSH host key Sep 2013

For RSA keys, you need to change "-t dsa" to "-t rsa", change the filename, and add a "-b 2048" option before the "-f filename" option. Suggested key sizes are 2048 bits for short-term use, 3072 bits for the next 1-2 decades, and 4096 bits for keys that will be in use past 2030. The downside is that as key length doubles, performance drops by a factor of 6-7x.
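
For example, the equivalent RSA re-key command would look something like this (the same overwrite warning applies):

# /usr/bin/ssh-keygen -N '' -C 'servername SSH host key Sep 2013' -t rsa -b 2048 -f /etc/ssh/ssh_host_rsa_key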

Friday, August 23, 2013

Linux server partitioning with mdadm software RAID and LVM2

Over the years, I've really come to appreciate what judicious use of LVM (or LVM2) brings to the table when administering servers. If you rely on it heavily and leverage it properly, you can do things like:
  • Snapshot any file system (other than /boot) for backups, or to make images, or to test something out.
  • Migrate a logical volume (LV) from one physical volume (PV) to another, without having to take the file system offline or deal with downtime.
  • Resize file systems that are too large or too small, with minimal downtime (if the file system supports it).
Basically, other than /boot, if you're thinking of creating a separate partition or Software RAID device, then you should be using LVM instead of physical partitions or RAID devices. You gain a lot of flexibility in the long run, and setting up LVM on top of hardware RAID, software RAID, or plain old disks is no longer that difficult.

These days, when I set up disk partitions to hold a server's boot-up files, I only create (2) partitions on the drive: one for the Software RAID-1 mirror set to hold /boot (usually 256-1024MB), and the rest of the drive is a second RAID-1 mirror set that is turned into an LVM physical volume (PV) and assigned to a volume group. I will usually only partition out to about 99% of the drive size if I'm doing Software RAID, because that makes it easier later to put in a different model disk of the same capacity and still have things work. Drives from different manufacturers have slightly different capacities, so you can run into trouble down the road when you go to replace a failed drive if you assumed all drives were exactly the same size as your original drives.
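
A rough sketch of that layout on a two-disk box (device names and the volume group name are just examples): the /dev/sdX1 partitions become the /boot mirror, and the /dev/sdX2 partitions become the mirror that backs the LVM physical volume.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# pvcreate /dev/md1
# vgcreate vg0 /dev/md1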

Inside that LVM volume group (VG), I create all of my other partitions. These days, that means:
  • / - the "root" partition, usually 16-32GB for CentOS 6
  • /home - Usually starts at 4GB for a server where people won't be logging in much.
  • /opt - 4-24GB
  • /srv - 1-4GB (sub-directories get their own LV later)
  • /tmp - 4-24GB
  • /usr/local - 8-24GB
  • /var - 4-24GB
  • /var/log - 4-24GB
  • /var/spool - 2-4GB to start
  • /var/tmp - 2-8GB to start
And that's just the basic operating system file systems. For things like squid, e-mail, web servers, samba shares, etc., each of those will get its own LV, allocated from the server-wide volume group.
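
Carving those out of the volume group is then just a matter of the following (the sizes, the vg0 name, and the LV names are examples):

# lvcreate -L 16G -n lv_root vg0
# mkfs.ext4 /dev/vg0/lv_root
# lvcreate -L 4G -n lv_var_log vg0
# mkfs.ext4 /dev/vg0/lv_var_log

And growing one later (ext4 supports online growth) looks like:

# lvextend -L +4G /dev/vg0/lv_var_log
# resize2fs /dev/vg0/lv_var_log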

Follow ups:
  • GRUB2 understands mdadm (software RAID) and LVM. So we will eventually be able to put /boot in an LVM volume. But the GRUB that ships with RHEL6 and CentOS6 is still GRUB 0.97.

Sunday, August 18, 2013

TLS: SSL_read() failed: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca: SSL alert number 48

Here's a fun error message that we're getting on our mail server at the office:

Aug 15 10:52:26 fvs-pri dovecot: imap-login: Disconnected (no auth attempts in 1 secs): user=<>, rip=172.30.0.221, lip=172.30.0.1, TLS: SSL_read() failed: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca: SSL alert number 48, session=

The odd thing is that public SSL testing tools (such as the one at DigiCert) do not indicate any problems with the mail server's SSL configuration. And this only seems to affect some clients, and possibly only Dovecot. So my guess is that Apache/OpenSSL is configured correctly, but Dovecot is not.

The key to figuring this out is the "openssl s_client" command:

openssl s_client -connect mail.example.com:143 -starttls imap

This showed us that the openssl library was having problems validating the server's certificate, because the intermediate certificates were not also stored in the certificate file that gets sent to the client. The solution is to adjust the file pointed to by Dovecot's "ssl_cert" argument and add your certificate vendor's intermediate certificates to the end of the file.

The order of the certificates inside that file matters. Your server certificate needs to be first, then list the rest of the certificates in order as you move up the certificate chain to the root CA.
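
A minimal sketch of the fix (the file names and paths here are examples, not Dovecot defaults):

# cat mail.example.com.crt intermediate1.crt intermediate2.crt > /etc/pki/dovecot/certs/mail-chain.pem

Then point Dovecot at the combined file and restart it (Dovecot 2.x syntax):

ssl_cert = </etc/pki/dovecot/certs/mail-chain.pem
ssl_key = </etc/pki/dovecot/private/mail.example.com.key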

Wednesday, August 14, 2013

LUKS: /dev/mapper "read failed after 0 of 4096 at 0: Input/output error"

We're using external USB drives for our backups, protected using LUKS/cryptsetup. On our 3-4 year old Opteron 2210 HE CPU running at 1.8GHz, we estimate that LUKS can perform about 60-70 MB/s per CPU core. We mount the LUKS volumes automatically (at server boot) by listing them in /etc/crypttab and using a key-file instead of having to enter a password; autofs handles the automatic mounting/dismounting of the ext4 file system inside the LUKS volume.
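
For reference, the /etc/crypttab entry for such a volume looks roughly like this (the mapper name matches the example below; the UUID and key file path are made up):

# <target name>   <source device>                          <key file>
USBOFFSITE12B     /dev/disk/by-uuid/<uuid-of-partition>    /root/keys/usb-backup.key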

It all works very well, until you remove the USB device and then run LVM's pvscan/lvscan commands, which throw the following errors:
# pvscan
  /dev/mapper/USBOFFSITE12B: read failed after 0 of 4096 at 999201636352: Input/output error
  /dev/mapper/USBOFFSITE12B: read failed after 0 of 4096 at 999201693696: Input/output error
  /dev/mapper/USBOFFSITE12B: read failed after 0 of 4096 at 0: Input/output error
  /dev/mapper/USBOFFSITE12B: read failed after 0 of 4096 at 4096: Input/output error
  PV /dev/md12   VG vg13   lvm2 [2.67 TiB / 821.62 GiB free]
  Total: 1 [2.67 TiB] / in use: 1 [2.67 TiB] / in no VG: 0 [0   ]

If 'autofs' allowed us to execute a script after it dismounts the drive due to inactivity, that might be a good option for us. But I'm not sure that is possible.

Another option would be that at the end of the daily backup script which copies files off to the attached device, we dismount it automatically.

A third option would be to check every hour whether /dev/mapper/NAME is still in use and, if not, tell cryptsetup to close it. The command to check that might be "dmsetup ls --tree -o uuid,open | grep -i 'CRYPT-LUKS'".
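
Whichever way the check is done, the actual teardown once autofs has released the ext4 file system is just (mapper name taken from the example above):

# cryptsetup luksClose USBOFFSITE12B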

Still exploring options at this point. I need to do some more testing first. I'm also searching for a way to auto-open LUKS volumes upon device insertion.

Monday, August 12, 2013

Auto-mounted external USB drives with Linux autofs daemon

This explains how to use 'autofs' to automatically mount external USB hard drives at a predictable path under CentOS 6 (but it probably also works for CentOS 5, RHEL 5, RHEL 6, etc.). On one of my Ubuntu desktops, file systems on USB drives get automatically mounted, but that requires a GUI environment to be installed. On our servers at work, we generally do not install a GUI environment. We also had some special requirements in order to use the USB drives as backup targets:
  • The USB drive needs to mount at a standard location. Such as /mnt/offsite/LABEL or something. This way the backup scripts are not overly complex.
  • Mounting needs to be automatic, without any user intervention.
  • Ideally, the file system should be dismounted when not in use. That way the user who swaps out the backup drives only needs to check the drive activity light to know that it is safe to swap the drive.

So the standard Ubuntu method of doing things via Gnome comes close, but I explored other options as well. The one I settled on is called 'autofs'. It is a script/daemon that is found in the standard CentOS 6 repositories, so you just need to run "yum install autofs".

Configuration of the autofs daemon consists of two parts:

A. You need to find and edit the master configuration file for autofs, also called the 'master map file'. Under CentOS 6, this is located at /etc/auto.master, or you can look at '/etc/sysconfig/autofs' to find out the configuration file location.

If you want to mount your USB backup drives at /mnt/offsite/XYZ, then your auto.master file only needs to contain the following:

# USB backup drives
/mnt/offsite            /etc/auto.offsite       --timeout=1800

As you can see, you tell autofs the location, what configuration file to look at for devices that need to be mounted, and optional arguments such as custom timeout settings. Note that you need to create the location directory by hand (e.g. "# mkdir /mnt/offsite") before starting autofs.

B. The second part of the configuration is telling autofs which devices should be automatically mounted, by editing the map file you named in auto.master (/etc/auto.offsite in this example). It is best if you use either the UUID of the partition (/dev/disk/by-uuid/XYZ) or the device ID (/dev/disk/by-id/XYZ-part0).

OFFSITE1 -fstype=ext4,rw,noatime,data=journal,commit=1 
    :/dev/disk/by-uuid/b5c1db0d-776f-499b-b4f2-ac53ec3bf0ef

Please note that the above should be all on one line; I have broken it up for clarity.

The only mount options that you really need are "fstype=auto,rw,noatime". I have added "data=journal,commit=1" to make the ext4 file system on the drive a bit more resilient.

One limitation of autofs is that if you have multiple USB drives to be mounted, each one needs its own unique mount point (/mnt/offsite/OFFSITE1, /mnt/offsite/OFFSITE2, etc.). However, you could decide to mount all of the drives at the same location if you give them all the same UUID. But I'm not sure how well autofs would deal with two drives, having the same UUID, being hooked up to the server at the same time.



After editing your mapping file, you (probably) need to restart autofs. Assuming that you have done everything correctly, attempting to list the contents 'ls -l /mnt/offsite/OFFSITE1' will cause the drive to be automatically mounted. After the timeout period expires, the drive will automatically dismount.
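
On CentOS 6, that boils down to something like:

# service autofs restart
# ls -l /mnt/offsite/OFFSITE1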

Wednesday, May 29, 2013

Dovecot fails to compile against PostgreSQL 9.2

If you are trying to compile dovecot against PostgreSQL, then you are probably running configure with the "--with-pgsql" option.  Except that if you installed PostgreSQL via the PGDG repository on CentOS 6, you are probably stuck with the following error:

checking for shadow.h... yes
checking for pam_start in -lpam... no
checking for auth_userokay... no
checking for pg_config... NO
checking for PQconnectdb in -lpq... no
configure: error: Can't build with PostgreSQL support: libpq not found


The root cause is that pg_config is not in your PATH.  So you should add "/usr/pgsql-9.2/bin" to your PATH before calling ./configure.  If you are not sure where pg_config is located, try "find / -name pg_config".

#!/bin/bash
PATH=/usr/pgsql-9.2/bin:$PATH
export PATH
./configure \
        --with-pgsql


Once you do the above, the dovecot 2.2 configure script will run to completion.

Saturday, May 18, 2013

FSVS automated snapshots

One common use for FSVS is to make automated snapshots of portions of your Linux file system, such as monitoring changes to log files, executables, or data directories.  The downside is that, even if nothing has changed, FSVS will still generate a SVN commit.  So if you are running FSVS on an hourly basis through the day, your SVN log will be cluttered with hundreds of commits that contain no useful information.

The following is an example of how we keep track of changes to a /cfmc directory on the server.  This runs hourly and is our primary backup against data loss on this server.  Because FSVS and SVN only send the differences across the wire, it's a very efficient method.  And since this is protecting client data, we're going to version control it and keep it for years.

The magic trick is the FCOUNT= line which runs FSVS and looks to see whether there were any changes to files in the monitored directory tree.  If it found changes, then we go ahead and do an automated commit.

#!/bin/sh
# Only executes FSVS if FSVS reports outstanding changes

FSVS_CONF=~/.fsvs-conf
FSVS_WAA=~/.fsvs-waa
export FSVS_CONF FSVS_WAA

cd /cfmc

FCOUNT=`/usr/local/bin/fsvs | grep -v 'dir.*\.$' | wc -l`

if [ $FCOUNT -gt 0 ] ; then
    /usr/local/bin/fsvs ci -m "Automatic FSVS snapshot"
else
    echo "Nothing changed"
fi

Wednesday, February 13, 2013

Backing up SVN (SubVersion) repository directories

When backing up subversion (SVN) repositories, I find it best to use a bash shell script to search for the SVN repositories.  These can then be passed to the svnadmin hotcopy command or the svnadmin dump command to dump out each repository by itself.

First off, you should define a few variables at the top of your bash shell script.  The key one is ${BASE} which lets you define the location of your SVN repositories. 

# BASE should be location of SVN repositories (no trailing slash)
# such as: BASE=`pwd` or BASE="/var/svn"
BASE="/var/svn"


FIND=/usr/bin/find
GREP=/bin/grep
RM=/bin/rm
SED=/bin/sed


Next is the bit of find/grep/sed magic that turns the list of directories that contain SVN repositories into a list of repository directories.  In this particular case, we are searching for the item named 'current' at a maximum depth of 3 directories deep, then making sure it is 'db/current' in the full pathname.  Last, we sort the list of paths so that we process things in alphabetical order.

DIRS=`$FIND ${BASE} -maxdepth 3 -name current | \
    $GREP 'db/current$' | $SED 's:/db/current$::' | $SED "s:^${BASE}/::" | \
    sort`

As an alternative to processing in alphabetical order, you can use the following perl fragment to randomize the order of the directories.  The advantage of this is that if your backup script breaks for some reason, in the middle of the backup, you have a far higher chance that directory backups at the bottom of the list won't be too far out of date (they might be a few days old, but probably not a few months old).  This is an especially good idea if you are sending the backups out over a WAN link using rsync.

To speed up our backups, we also only search for repositories modified on-disk in the last 15 days.

DIRS=`$FIND ${BASE} -maxdepth 3 -name current -mtime -15 | \
    $GREP 'db/current$' | $SED 's:/db/current$::' | $SED "s:^${BASE}/::" | \
    perl -MList::Util -e 'print List::Util::shuffle <>'`


The loop portion is simply (this particular example shows how to use "svnadmin verify"):

for DIR in ${DIRS}
do

    echo "verifying ${DIR}"
    svnadmin verify --quiet ${BASE}/${DIR}
    status=$?
    if [ $status -ne 0 ]; then
        echo "svnadmin verify FAILED with status: $status"
    else
        echo "svnadmin verify succeeded"
    fi

    echo ""
done
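
If you want dump files instead of a verify pass, the loop body is nearly identical. Here is a rough sketch using "svnadmin dump"; it assumes a ${DEST} directory variable defined at the top of the script, alongside ${BASE}:

for DIR in ${DIRS}
do
    echo "dumping ${DIR}"
    # recreate the client/project sub-directories under ${DEST}
    mkdir -p ${DEST}/`dirname ${DIR}`
    svnadmin dump --quiet ${BASE}/${DIR} | gzip > ${DEST}/${DIR}.dump.gz
done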

Hope these tricks help.

Tuesday, January 15, 2013

mdadm: Using bitmaps to speed up rebuilds

As SATA drives have gotten larger, the chance of a minor error creeping in during a RAID rebuild has greatly increased.  For the new 2-4 terabyte models, assuming a rebuild rate of 100 MB/s, a mirror rebuild can take 5-10 hours.  The situation is even worse for RAID 5 and RAID 6 arrays, where you have to update multiple disks and rebuild times tend to scale with the total size of the array.

One low-level solution I have been using is to partition the drive into smaller segments (usually 1/4 to 1/2 of the drive capacity), use mdadm Software RAID across each segment, then put all the segments into a single LVM Volume Group (VG).  The advantage is that it's simple and often only a single segment of the drive has to be re-sync'd from scratch if there is a power outage or other glitch during the rebuild.
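
As a sketch of that segmented layout for a pair of 2TB drives, each split into two roughly 1TB halves (device names and the VG name are examples):

# mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# pvcreate /dev/md10 /dev/md11
# vgcreate vg_data /dev/md10 /dev/md11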

The other (probably better) solution is to use the mdadm --bitmap option (kernel.org link).  This allows the array to keep track of which blocks are dirty (not yet sync'd to all disks) or clean.  It speeds up resync operations greatly if there is a power failure or glitch during the write operation.  The main disadvantage is that you are looking at three write operations whenever you change a bit of data.  First, mdadm has to mark the bit relating to that section of the disk as dirty.  Second, it writes out the data.  Third, it has to go back and mark the bit as clean.  This can severely impact performance.

By default, when using internal bitmaps, mdadm splits the disk into as many chunks as possible given the small size of the bitmap area.  For smaller partitions, the chunk size can be as small as 4MiB, but you can also specify larger values with the "--bitmap-chunk=NNNN" argument.  For larger drives, you will want to consider chunk sizes of at least 16-128MiB.

Warnings:

- I've run into a situation where my version of mdadm (v2.6.9 - 10th March 2009, Linux version 2.6.18-194.32.1.el5) would cause the machine to lock up hard when removing a bitmap.  Another machine has a newer CentOS5 kernel (2.6.18-308.16.1.el5xen) and experienced no issues.  So make sure you are running a fairly recent kernel.

Instructions:

In order to add bitmaps to an existing Software RAID array, the array must be "clean".  The command is simply:

# mdadm --grow --bitmap=internal --bitmap-chunk=32768 /dev/mdX

If you want to resize the bitmap chunk, you must first remove the existing bitmap:

# mdadm --grow --bitmap=none /dev/mdX
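
To check whether an array currently has a bitmap (and what its chunk size is), look at /proc/mdstat or the array details:

# cat /proc/mdstat
# mdadm --detail /dev/mdX | grep -i bitmap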

Performance:

I did some testing on a system I had access to which had a 7 drive RAID-10 array (6 active spindles, 1 spare) using 7200 RPM 500GB SATA drives.  Values are in KB/sec using bonnie++ as the test program (15GB test size).

#1 No bitmap:
Seq Write: 139035
Seq ReWrite:  43732
Seq Read: 76221

#2 bitmap size of 4096KiB
Seq Write: 109720 (27% lower)
Seq ReWrite: 40179
Seq Read: 72917

#3 bitmap size of 16384KiB
Seq Write: 127924 (8.7% lower)
Seq ReWrite: 40734
Seq Read: 73870

#4 bitmap size of 65536KiB
Seq Write: 124694 (12% lower)
Seq ReWrite: 40674
Seq Read: 74501

As can be seen, the larger chunk sizes do not impact sequential write performance as much.

Tuesday, September 18, 2012

MultiPar (the spiritual successor to QuickPar)

We archive a lot of data onto CD/DVD, which has never been a reliable medium even if you use high-quality Taiyo Yuden media.  Because a CD/DVD can become unreadable over the course of years or decades, you have to take one of two approaches:

1) Burn a second (or third) copy of every CD/DVD that you create.  The primary downside is that you double the number of disks that you have to keep track of.  If you store the disks in two separate geographical locations, this is not necessarily a bad thing.  But back when media was far more expensive, this also drove up your costs a lot.  You still need to create some sort of checksum / verification data at the file level so that you can validate your archives down the road (such as MD5 or SHA1 hashes).

2) Add some sort of parity / error recovery data to the disk contents.  While CD/DVD media both include Reed-Solomon error correction at the sector level, you can't always get information about how clean the disc is and whether or not it is failing.  In many cases, the first sign of trouble occurs after the point where the built-in error correction is no longer able to do its job.  So you use a program like WinRAR, QuickPAR, par1 or par2 command line programs, or something else to create additional error correction data and add it to the data being written to the media.

An important concept when dealing with long term archival is "recovery window".  In most cases, when media starts to fail, it is a progressive condition where only a few sectors will have issues at the start.  As time goes on, more and more sectors will fail verification and less and less data will be recoverable.  The exception to this is if the TOC (table of contents) track goes bad, which will then require the use of special hardware in order to read any data off of the media.

In the case of the above approaches:

1) Multiple copies -- The recovery window is from the point that you find out that one of the copies has failed until you make a copy of one of the remaining copies that is still valid. Depending on where the physical media is located, this might be a quick process, or it might require a few days to transport media between locations.  The problem comes when multiple copies are experiencing data loss, because you will need to hope that the same files/sectors on both media are not corrupt on all copies.

Note that the multiple copies approach is only recoverable at the "file" level in most archive situations.  Most verification codes are calculated at the file level, which means a file is either completely good or completely bad.  Unless the file contains internal consistency checks, you cannot combine two damaged files to create a new undamaged copy.

2) Error correction data -- Again, the recovery window starts at the point in time where you first discover an issue.  But because the error correction data lives on the disk next to the raw data, you are able to immediately determine whether the media has failed to the point where data is actually lost.  Some of the tools (QuickPar in particular) used to create verification data can even recover disks where the file system has been corrupted by digging through the block level data and pulling out good blocks.

Note that the two approaches are not exclusive to each other.  For the truly paranoid, creating two copies of the media along with dedicating 5-20% of the media's capacity to error correction will give you lots of options when dealing with "bit rot".

So, back to the original point of the posting...

We used to use QuickPar to create our recovery data.  It was written for Windows XP and had a nice GUI which made it quick to bundle up a bunch of files and create recovery data for those files.  Speed was fairly good, but it never did multi-threading, nor did it ever support subdirectories.  It has also not been updated since the 2003-2004 timeframe, so it is a bit of a "dead" project.

The successor to QuickPar, for those of us wanting a Windows program with a GUI, seems to be MultiPar.  I stumbled across this from Stuart's Blog posting about MultiPar.  Even though the download page is written in Japanese, MultiPar does have an English GUI option.  Just look for the green button in the middle of the page which says "Download now" and look at the filename (such as "MultiPar121_setup.exe").

Thursday, June 14, 2012

Windows 2003: Loses connection to a network share on a Windows 7 machine

This has been perplexing us at work for a bit this week.  We have a Windows 2003 server which attempts to send its backup logs to a desktop PC.  It used to send them to a Windows XP machine, but that has now been replaced with a Windows 7 Professional workstation.

The problem is that everything works fine for 10-20 minutes at a time.  Then the Windows 2003 server will lose connection to the Windows 7 desktop and you will be unable to map to the share points on the Win7 machine until you reboot the Windows 7 desktop.

On the Windows 2003 server you will see:

C:> net view \\hostname.example.com
System error 58 has occurred.

The specified server cannot perform the requested operation.

This will also show up in the error log on the Windows 7 machine as:

Error 2017: The server was unable to allocate from the system nonpaged pool because the server reached the configured limit for nonpaged pool allocations.

The fix for this is two-fold and is all performed on the Win7 machine; it requires editing a pair of registry entries.

1) HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache

This gets changed from "0" (zero) to "1" (one).

2) HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size

This gets changed from "1" to "3".

Reference links:

Windows 7 Nonpaged Pool Srv Error 2017
Systems Internals Tips and Trivia

Saturday, May 12, 2012

SpringSource Tool Suite, Spring Roo 1.2.1 shell will not start

The dreaded "Please stand by until the Roo Shell is completely loaded." message when setting up a new machine with SpringSource Tool Suite 2.9.1 can be very frustrating (Spring Roo 1.2.1).  This is on a fairly new install of Windows 7 64bit and the symptom will be that the Roo shell never manages to get past the "Please stand by" message.

Another symptom is that if you attempt to create a new Spring Roo project using the wizard, the only thing that will show up in the Package Explorer is the "JRE System Library" tree.  The wizard will not be able to create the standard src/main/java, src/test/java, or the pom.xml files.

The problem boils down to permission issues in Windows 7 when running as a restricted user and Spring Roo's desire to create files/folders under "C:\Program Files\springsource".

The simple workaround is to go to "C:\Program Files\springsource", go into properties, then the Security tab.  Click the UAC "Edit" button and give the "Users" group permission to Modify (and Write) files under the springsource folder.

Once you restart STS 2.9.1, things should now work if you attempt to open up the Spring Roo shell.

A second issue that you may run into is that by default, Spring Roo expects to create a Java 1.6 project, not a Java 1.7 project.  If you have the 1.7 JDK installed, you may need to also install the older 1.6 JDK (or figure out how to tell Spring Roo to create 1.7 projects by default).

Friday, March 09, 2012

Unsupported sector size 520

While setting up a new DIY server using 15k RPM SAS drives, I ran across some drives which came from the factory with 520 byte sectors (oops).  You'll be able to hook them up and see that they show up as /dev/sdX, but you won't be able to do anything with them (such as read/write or partition).

If you look in /var/log/messages, you will see useful information:

Mar  9 08:08:54 vhc-carthage kernel: sd 6:0:7:0: [sdh] Attached SCSI disk
Mar  9 08:08:57 vhc-carthage kernel: ...ready
Mar  9 08:08:57 vhc-carthage kernel: sd 6:0:8:0: [sdi] Unsupported sector size 520.
Mar  9 08:08:57 vhc-carthage kernel: sd 6:0:8:0: [sdi] 0 512-byte logical blocks: (0 B/0 B)
Mar  9 08:08:57 vhc-carthage kernel: sd 6:0:8:0: [sdi] 520-byte physical blocks
Mar  9 08:08:57 vhc-carthage kernel: sd 6:0:8:0: [sdi] Write Protect is off
Mar  9 08:08:57 vhc-carthage kernel: sd 6:0:8:0: [sdi] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar  9 08:08:57 vhc-carthage kernel: sd 6:0:8:0: [sdi] Unsupported sector size 520.
Mar  9 08:08:57 vhc-carthage kernel: sd 6:0:8:0: [sdi] Attached SCSI disk

The fix for this is pretty simple in RHEL6/CentOS6.  Major pointers for this came from PissedOffAdmins.

# yum install sg3_utils

# sg_scan -i
/dev/sg8: scsi6 channel=0 id=7 lun=0
    NETAPP    X287_S15K5288A15  NA00 [rmb=0 cmdq=1 pqual=0 pdev=0x0]
/dev/sg9: scsi6 channel=0 id=8 lun=0
    NETAPP    X287_S15K5288A15  NA00 [rmb=0 cmdq=1 pqual=0 pdev=0x0]

Now you should format the offending drive using the "sg_format" command.

[root@vhc-carthage /]# sg_format --format --size=512 /dev/sg8
    NETAPP    X287_S15K5288A15  NA00   peripheral_type: disk [0x0]
Mode Sense (block descriptor) data, prior to changes:
  Number of blocks=573653847 [0x22314357]
  Block size=520 [0x208]

A FORMAT will commence in 10 seconds
    ALL data on /dev/sg8 will be DESTROYED
        Press control-C to abort

After this, the disk should work when connected to a regular SAS controller (in my case, a 16-port LSI).

Final note: You will probably find that the disks which have been converted from 520 byte sectors to 512 byte sectors will not perform as well as a disk that was originally manufactured for use as a 512 byte sector drive.  At least, that has been my experience with a sample size of just two drives (and I don't plan on purchasing the wrong drives again).

Monday, February 13, 2012

FSVS Installation on CentOS 6.2

(See prior posts on FSVS installation.)

The FSVS install tarball is available at fsvs.tigris.org.

I'm starting this time with a minimal CentOS 6.2 install.  No GUI installed, not much of anything.  As with all software installation from source packages, you should do this as a regular user on the box, not as the root user.

Download and compile FSVS

You might need to install "wget" first

$ sudo yum install wget

Now you can download and compile the FSVS package.  Note that you'll need to go through the above link for fsvs.tigris.org in order to find the actual download location.

$ cd /usr/local/src
$ sudo mkdir fsvs
$ sudo chown thomas:thomas fsvs
$ cd fsvs
$ wget http://download.fsvs-software.org/fsvs-1.2.4.tar.gz
$ tar xzf fsvs-1.2.4.tar.gz
$ cd fsvs-1.2.4
$ ./configure

Now, on a fresh CentOS 6.2 minimal install, you're going to have to install lots of extra things at this point.  Or you could compile it on another CentOS 6 box and copy it over.  But for our purposes, here is the list of all of the supplemental packages that you will have to install.  Note that the "subversion-devel" currently in the CentOS repositories is 1.6.11, which is not too far out of date.  In some cases, you may prefer to pull in 1.7 from RPMForge or somewhere else.

(Note that you could do all of these installations in a single "yum install" command.)

$ sudo yum install gcc make
$ sudo yum install subversion-devel
$ sudo yum install pcre-devel
$ sudo yum install gdbm-devel
$ sudo yum install openssh-clients

Once again, run the configure script and look for errors or missing dependencies.

$ ./configure

With no errors, we can now run the "make" command and then copy the "fsvs" executable to /usr/local/bin.

$ make
$ sudo cp src/fsvs /usr/local/bin/
$ sudo chmod 755 /usr/local/bin/fsvs

Setting up a repository on the SVN server

We'll assume that you have a SVN server elsewhere, where you want to store your system configuration and any files that you want to track.

This is how we setup users on our SVN server. Machine accounts are prefixed as "sys-" in front of the machine name. The SVN repository name matches the name of the machine. In general, only the machine account should have write access to the repository, although you may wish to add other users to the group so that they can gain read-only access.

# useradd -m sys-www-test
# passwd sys-www-test
# svnadmin create /var/svn/sys-www-test
# chmod -R 750 /var/svn/sys-www-test
# chmod -R g+s /var/svn/sys-www-test/db
# chown -R sys-www-test:sys-www-test /var/svn/sys-www-test

Setting up the SSH private/public key pair to talk to the SVN repository server

We'll need to create an SSH key that can be used on our SVN server. You may wish to use a slightly larger RSA key (3072 bits or 4096 bits) if you're working on an extra sensitive server. But a key size of 2048 bits should be secure for another decade for this purpose.

# cd /root/
# mkdir .ssh
# chcon -R -t ssh_home_t ~/.ssh
# chmod 700 .ssh
# cd .ssh
# /usr/bin/ssh-keygen -N '' -C 'svn key for root@hostname' -t rsa -b 2048 -f ~/.ssh/svn-fsvs-ssh-key
# chmod 600 *
# cat svn-fsvs-ssh-key.pub

Then configure a SSH config file to point at the proper port and identity file that you just created.

# vi /root/.ssh/config
Host svn.example.com
Port 22
User sys-www-test
IdentityFile /root/.ssh/svn-fsvs-ssh-key
# chmod 600 *

Back to the Subversion repository server

Back on the SVN server, you'll need to finish configuration of the user that will add files to the SVN repository.

# su username
$ cd ~/
$ mkdir .ssh
$ chmod 700 .ssh
$ cd .ssh
$ cat >> authorized_keys
(paste in the SSH key from the other server)
$ chmod 600 *

Now you'll want to prepend the following in front of the key line in the authorized_keys file.

command="/usr/bin/svnserve -t -r /var/svn",no-agent-forwarding,no-pty,no-port-forwarding,no-X11-forwarding

That ensures (mostly) that the key can only be used to run the svnserve command and that it can't be used to access a command shell on the SVN server. Test the configuration back on the original server by issuing the "svn info URL" command. Alternately, you can try to ssh to the SVN repository server. Errors will usually either be logged in /var/log/secure on the source server or in the same log file on the SVN repository server. Here's an example of a successful connection:

# ssh svn.example.com
( success ( 2 2 ( ) ( edit-pipeline svndiff1 absent-entries commit-revprops depth log-revprops partial-replay ) ) )

This shows that the key is running the "svnserve" command automatically.
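
You can also test with Subversion itself, which exercises the same svn+ssh path that FSVS will use (repository name from the example above):

# svn info svn+ssh://svn.example.com/sys-www-test/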

Connect the system to the SVN repository

The very first command that you'll need to issue for FSVS is the "urls" (or "initialize") command. This tells FSVS what repository will be used to store the files.

# cd /
# mkdir /var/spool/fsvs
# mkdir /etc/fsvs/
# fsvs urls svn+ssh://svn.example.com/sys-www-test/

You may see the following error, which means you need to create the /var/spool/fsvs folder, then reissue the fsvs urls command.

stat() of waa-path "/var/spool/fsvs/" failed. Does your local WAA storage area exist?

The following error means that you forgot to create the /etc/fsvs/ folder.

Cannot write to the FSVS_CONF path "/etc/fsvs/".

Configure ignore patterns and doing the base check-in

When constructing ignore patterns, generally work on adding a few directories at a time to the SVN repository. Everyone has different directories that they won't want to version, so you'll need to tailor the following to match your configuration. However, I generally recommend starting with the following (this is the output from "fsvs ignore dump", which you can pipe into a file, edit, then pipe back into "fsvs ignore load"):

group:ignore,./bin/
group:ignore,./dev/
group:ignore,./etc/fsvs/
group:ignore,./etc/gconf/
group:ignore,./etc/gdm/
group:ignore,./home/
group:ignore,./lib/
group:ignore,./lib64/
group:ignore,./lost+found
group:ignore,./mnt/
group:ignore,./proc/
group:ignore,./sbin/
group:ignore,./selinux/
group:ignore,./srv/
group:ignore,./sys/
group:ignore,./tmp/
group:ignore,./usr/bin/
group:ignore,./usr/include/
group:ignore,./usr/kerberos/
group:ignore,./usr/lib/
group:ignore,./usr/lib64/
group:ignore,./usr/libexec/
group:ignore,./usr/local/share/man/
group:ignore,./usr/mail/
group:ignore,./usr/sbin/
group:ignore,./usr/share/
group:ignore,./usr/src/
group:ignore,./usr/tmp/
group:ignore,./usr/X11R6/
group:ignore,./var/cache/
group:ignore,./var/lib/clamav
group:ignore,./var/lib/mlocate/
group:ignore,./var/lib/ntp/
group:ignore,./var/lib/php/session
group:ignore,./var/lib/postfix/
group:ignore,/var/lib/rpm/
group:ignore,./var/lock/
group:ignore,./var/log/
group:ignore,./var/mail/
group:ignore,./var/opt/compaq/locks/
group:ignore,./var/run/
group:ignore,./var/spool/
group:ignore,./var/tmp/
group:ignore,./var/vmail

Then you'll want to either ignore (or encrypt) the SSH key and shadow password files.

# cd /
# fsvs ignore group:ignore,./root/.ssh
# fsvs ignore group:ignore,./etc/shadow*
# fsvs ignore group:ignore,./etc/ssh/ssh_host_key
# fsvs ignore group:ignore,./etc/ssh/ssh_host_dsa_key
# fsvs ignore group:ignore,./etc/ssh/ssh_host_rsa_key

You can check what FSVS is going to version by using the "fsvs status pathname" command (such as "fsvs status /etc"). Once you are happy with the selection in a particular path, you can do the following command:

# fsvs ci -m "base check-in" /etc

Repeat this for the various top level trees until you have checked everything in. Then you should do one last check-in at the root level that catches anything you might have missed.

Wednesday, October 12, 2011

GNU parted - GPT partition editor

Personally, I dislike "parted" because it works differently than all of the older utilities like fdisk and sfdisk.  Under the old system you could easily run "fdisk -l /dev/some/device" and get a sensible reply.  If you try something similar with GNU parted, like "parted -l /dev/some/device", you get a listing of *all* of the devices on the system rather than just the one you want to look at.  The syntax is instead "parted /dev/some/device print", and the device name has to come *first* in that command, otherwise it will complain about "print" not being a device.

So, you have a big disk and you need to partition it.  First, let's look at the surviving disk in the Software RAID array:

# parted /dev/sde print

Model: ATA Hitachi HDS72202 (scsi)
Disk /dev/sde: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  2000GB  2000GB               primary  raid

Information: Don't forget to update /etc/fstab, if necessary. 

Now, let's look at the new disk:

# parted /dev/sdd print
Error: Unable to open /dev/sdd - unrecognised disk label.

Makes a lot of sense, doesn't it?  Real helpful! Good old GNU parted refuses to give us information about an entirely blank disk, not even helpful information like size (which "fdisk -l" will give us).

So go ahead and label that blank drive with a GPT disk label:

# parted /dev/sdd mklabel gpt


Now we'll actually get good information out of GNU parted's print command:

# parted /dev/sdd print     

Model: ATA ST2000VX002-1AH1 (scsi)
Disk /dev/sdd: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

Well, that's a start (but why couldn't it tell us that before?).

# parted /dev/sdd
(parted) mkpart primary 1 -1
(parted) set 1 raid on

# parted /dev/sdd print

Model: ATA ST2000VX002-1AH1 (scsi)
Disk /dev/sdd: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  2000GB  2000GB               primary  raid


 

Tuesday, July 26, 2011

RHEL: yum update - rpmforge causes segmentation fault

So, rpmforge is throwing errors today that look like:

# yum info rpmforge-release
Loaded plugins: rhnplugin, security
rpmforge   | 1.1 kB     00:00     
rpmforge: [####    ] 471/10722Segmentation fault

This happens any time that you do a 'yum check-update', 'yum update', 'yum search packagename', etc. The issue also happens with rpmforge-extras on this machine (if I disable the rpmforge repository). Disabling the rpmforge and rpmforge-extras repositories does temporarily fix the issue, but it means you cannot get updates from rpmforge / rpmforge-extras until it is fixed.

The following will not fix the issue (nor will most of the suggestions at the end):

# yum clean all
# yum makecache

Unfortunately, all of the search results are turning up either link-farms or issues not related to the yum software. But I'm still looking for answers.  (See the end of the post for the latest information.)

Suggestion #1: /var/lib/rpm/__db.* (from Jan 2007)

# cd /var/lib/rpm/
# rm __db.*
(that's two underscores)
# rpm --rebuilddb

Unfortunately, it didn't fix this particular issue.

Suggestion #2: /var/cache/yum/

Remove the rpmforge and rpmforge-extras directory, then have yum rebuild its cache (using either "yum update" or "yum makecache"). No luck here either for this particular issue.

Suggestion #3: uninstall / reinstall rpmforge-release RPM

# rpm -e rpmforge-release
# rpm -i rpmforge-release-0.5.2-2.el6.rf.*.rpm

No real luck there either.

Suggestion #4: Related to a recent "yum update"

As others here have noted, there was a large "yum update" (for the 5.7 release) this week on our RHEL5 server. An update which has not yet reached our CentOS 5 boxes, which are not misbehaving. Looking at the list of what was updated, there were 142 packages in this week's update. Some of the possibly interesting ones to this issue might be:

yum-security-1.1.16-16.el5.noarch.rpm
rhnsd-4.7.0-10.el5.x86_64.rpm
rhn-check-0.4.20-56.el5.noarch.rpm
authconfig-gtk-5.3.21-7.el5.x86_64.rpm
zlib-1.2.3-4.el5.i386.rpm
zlib-1.2.3-4.el5.x86_64.rpm
redhat-release-5Server-5.7.0.3.x86_64.rpm
yum-rhn-plugin-0.5.4-22.el5.noarch.rpm
rhnlib-2.5.22-6.el5.noarch.rpm
zlib-devel-1.2.3-4.el5.i386.rpm
zlib-devel-1.2.3-4.el5.x86_64.rpm
rhn-setup-0.4.20-56.el5.noarch.rpm
yum.noarch 0:3.2.22-37.el5

The list of packages that rpmforge-release depends on:

# rpm -qR rpmforge-release
/bin/sh  
config(rpmforge-release) = 0.5.1-1.el5.rf
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1

The packages that yum depends on:

# rpm -qR yum
/usr/bin/python  
config(yum) = 3.2.22-33.el5.centos
python >= 2.4
python(abi) = 2.4
python-elementtree  
python-iniparse  
python-sqlite  
rpm >= 0:4.4.2
rpm-python  
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(VersionedDependencies) <= 3.0.3-1
urlgrabber >= 3.1.0
yum-fastestmirror  
yum-metadata-parser >= 1.1.0

Off-hand, and since the compression library has caused issues in the past, I'm thinking maybe an issue with zlib. Update: Someone has posted a stack trace below in the comments which points to an issue other than zlib. A bunch of gcc / glibc files were also updated on my server today:
gcc.x86_64 0:4.1.2-51.el5
gcc-c++.x86_64 0:4.1.2-51.el5
gcc-gfortran.x86_64 0:4.1.2-51.el5
gdb.x86_64 0:7.0.1-37.el5
glibc.i686 0:2.5-65
glibc.x86_64 0:2.5-65
glibc-common.x86_64 0:2.5-65
glibc-devel.i386 0:2.5-65
glibc-devel.x86_64 0:2.5-65
glibc-headers.x86_64 0:2.5-65

Suggestion #5 or Update #5:

Looks like a bug.  Reports on the CentOS-Users and RPMForge users' mailing lists indicate that they are starting to get to the bottom of the issue. There's an issue with a package in the RPMForge repository with strange/incorrect metadata, and that's causing bad things to happen in yum.

See Bug 725798 - yum segmentation fault on rpmforge repository over at Red Hat's bug tracker.

Latest posts from the mailing lists indicate that both issues are being addressed today.  So you can temporarily disable rpmforge / rpmforge-extras and do your other updates, or delay all updates for a day or two until it is fixed.

Update #6:

Everything now works correctly again. You may want to run a "yum clean all" and "yum makecache" to refresh your metadata cache files.



Wednesday, June 01, 2011

IMAP server folder backup script

Here's a backup script that we use to back up our users' IMAP directories to another server.  It makes use of "rdiff-backup" and lets us keep 27 weeks of snapshots without using up a ton of disk space.

We process the folders on an account-by-account basis, which reduces the number of files that rdiff-backup has to keep track of and lowers the memory requirements.  It uses a brute-force method of looking for a "subscriptions" file, which appears in the root of the MailDir folder for each user.  If your installation doesn't have that file, you may need to search for files like "dovecot.index" or ".Junk".

The 3rd line in the "DIRS=" statement is a little one-line perl that will randomize the list of directories.  For a backup that runs each day, processing the directories in a random order gives a better chance that all directories will eventually be backed up - even if there is a directory that sometimes causes the script to break.  If the script always went in alphabetical order, and the script always breaks at the mid-point, then the directories towards the end of the alphabet will never be backed up. If you don't want that optimization, you can simply replace that section with $SORT and the directories will be processed in alphabetical order.

BKPHOST & BKPBASE control where the files get backed up to.  The BASE argument tells the script where to find the IMAP folders on the current server.

Note 1: In order to backup to a remote system like this, it works best if you setup SSH keys and a non-admin / non-root account on the destination server with limited access.

Note 2: "rdiff-backup" works best over LAN networks.  If the transfer is aborted due to a link going down, then it will back out completely from the transaction and the folder will not actually be backed up.  If the link is too unreliable, this means that rdiff-backup might never accomplish anything at all.  One workaround is to rdiff-backup to a local directory and then rsync (with --partial) to a remote host over the unreliable link.

#!/bin/bash

FIND=/usr/bin/find
GREP=/bin/grep
RM=/bin/rm
SED=/bin/sed
SORT=/bin/sort

# source directory 
BASE="/var/vmail/"

# destination
BKPHOST=backup-host.example.com
BKPBASE="/backup/mail/vmail/"

echo ""
echo "Backup $BASE to $BKPHOST"
echo "Started at: " `date`
echo ""

# since RHEL5/CentOS5 don't have "sort -R" option to randomize,
# use the following example
# echo -e "2\n1\n3\n5\n4" | perl -MList::Util -e 'print List::Util::shuffle <>'

DIRS=`$FIND $BASE -maxdepth 3 -name subscriptions | \
    $GREP '/var/vmail' | $SED 's:^/var/vmail/::' | $SED 's:subscriptions$::' | \
    perl -MList::Util -e 'print List::Util::shuffle <>'`

# keep track of directories processed so far (debug purposes)
DCNT=0

for DIR in ${DIRS}
do
    echo ""
    echo "`date` - Backup: $DIR"

    rdiff-backup -v3 --print-statistics --create-full-path \
        /var/vmail/$DIR ${BKPHOST}::${BKPBASE}${DIR}
    rdiff-backup -v3 --force --remove-older-than 27W \
        ${BKPHOST}::${BKPBASE}${DIR}

    # the following is debug code, to stop the script after N directories
    DCNT=$(($DCNT+1))
    #echo "DCNT: $DCNT"
    #if [[ $DCNT -ge 10 ]]; then exit 0; fi
done

echo ""
echo "Backup finished at:" `date`
echo ""


Performance for rdiff-backup is limited by the speed of the disks, then the CPU, and possibly the SSH overhead.  On older 2GHz Opteron servers, I see throughput of 4-8Mbps over a gigabit LAN.  Not that great for an initial backup, but MailDir folders have thousands of individual files.  Since most files are mail messages (text), they compress well with the SSH compression.  Things go much faster on later runs as only the changes get transferred from the host to the backup server.

A directory of about 810MB and 105,000 files took just under 11 minutes for an initial backup.  The SSH compression sped things up quite a bit because the actual transfer speed was around 4.4 megabytes per second, while the network interface never went much above 5-6 megabits per second.   That puts the net throughput at around 15-60 gigabytes per hour.

Friday, May 20, 2011

FSVS on Ubuntu 10.04 LTS (Lucid Lynx)

(See also my older post on installing FSVS on CentOS 5.5.)

While I prefer CentOS / RHEL for our servers, I do have a few Ubuntu machines lying around that I use as desktops.  And given my desire to track things using FSVS as much as possible, that means I need to install FSVS under Ubuntu as well.

Note: While you can install fsvs via apt-get with "apt-get install fsvs", the version included right now in the Ubuntu repositories is only FSVS 1.1.17.  This is fairly old code from around 2008.  The latest version is 1.2.3  and was released in January 2011.

Step 1: Create the server user and repository

On our SVN server, we'll need to setup a user account and create a repository to hold the files.  All of our repositories are kept under /var/svn and we create users and groups named "svn-sys-somesystem".  The individual system repository gets named sys-somesystem.

# cd /var/svn
# svnadmin create sys-somesystem
# chmod -R 750 sys-somesystem
# chmod -R g+s sys-somesystem/db
# useradd -m svn-sys-somesystem
# chown -R svn-sys-somesystem:svn-sys-somesystem sys-somesystem
# passwd  svn-sys-somesystem
(give it a very long, very random password)
Changing password for user svn-sys-somesystem.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
# su svn-sys-somesystem
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cd ~/.ssh

At which point we're ready to paste the SSH key from the other system in.  Switch to the system that you will be adding FSVS to.

Step 2: Setting up SSH keys

Login to the system which you will be adding as a FSVS client.  Under Ubuntu, this means a lot of 'sudo' work.  Note that lines ending in '\' should be concatenated together to form a single command.  You'll need to create a .ssh/config file so that SSH knows how to talk to the SVN server.

$ sudo mkdir /root/.ssh
$ sudo chmod 700 /root/.ssh
$ sudo /usr/bin/ssh-keygen -N '' \
-C 'svn key for root@hostname' \
-t rsa -b 2048 -f /root/.ssh/fsvs-key
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/fsvs-key.
Your public key has been saved in /root/.ssh/fsvs-key.pub.
The key fingerprint is:
ff:ee:dd:cc:bb:aa:99:88:77:66:55:44:33:22:11:00 svn key for root@hostname
$ sudo vim /root/.ssh/config
Host svn.yoursvnserver.com
Port 22
User svn-sys-somesystem
IdentityFile /root/.ssh/fsvs-key
$ sudo chmod 600 /root/.ssh/config
$ sudo chmod 600 /root/.ssh/fsvs-key
$ sudo chmod 600 /root/.ssh/fsvs-key.pub
$ sudo cat /root/.ssh/fsvs-key.pub

Copy this key into the clipboard or send it to the SVN server or the SVN server administrator. Back on the SVN server, you'll need to finish configuration of the user that will add files to the SVN repository.

# su svn-sys-somesystem
$ cd ~/.ssh
$ cat >> ~/.ssh/authorized_keys

The line for the SSH key should start with the following, which locks down the SSH key a bit and should only allow it to be used to run /usr/bin/svnserve.

command="/usr/bin/svnserve -t -r /var/svn",no-agent-forwarding,no-pty,no-port-forwarding,no-X11-forwarding

So a full SSH key line in the authorized_keys files will end up looking like:


command="/usr/bin/svnserve -t -r /var/svn",no-agent-forwarding,no-pty,no-port-forwarding,no-X11-forwarding ssh-rsa (long SSH key) (ssh key comment)

Hit Ctrl-C when finished pasting in the key.

$ chmod 600 ~/.ssh/authorized_keys

Now we can go back to the client machine where FSVS will be installed and test that our SSH connection works.

$ sudo ssh svn.yoursvnserver.com
The authenticity of host '[svn.yoursvnserver.com]:22 ([192.168.0.1]:22)' can't be established.
RSA key fingerprint is 99:88:77:66:55:44:66:33:22:11:00:55:ff:ee:dd:aa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[svn.yoursvnserver.com]:22,[192.168.0.1]:22' (RSA) to the list of known hosts.
PTY allocation request failed on channel 0
( success ( 2 2 ( ) ( edit-pipeline svndiff1 absent-entries commit-revprops depth log-revprops partial-replay ) ) ) Connection to svn.yoursvnserver.com closed.

If you don't get the SVN pipeline information, then the SSH keys are not configured properly, or you forgot to chmod a file back to 600 (usually the authorized_keys file).

Step 3: Installing FSVS

The FSVS install tarball is available at fsvs.tigris.org.

$ cd /usr/local/src
$ sudo wget http://download.fsvs-software.org/fsvs-1.2.3.tar.bz2
$ sudo tar xjf fsvs-1.2.3.tar.bz2
$ sudo chown -R username:username fsvs-1.2.3/
$ cd fsvs-1.2.3/

Now we are ready to configure and compile FSVS.  The following command will check the environment and tell us whether libraries are missing.

$ ./configure

Since we already know that we'll need to install a bunch of things, here is the apt-get command.  Note that if you need to find a development version of a particular package, then "apt-cache search apr | grep 'dev'" may be useful.

$ sudo apt-get update
$ sudo apt-get install build-essential
$ sudo apt-get install libpcre3-dev
$ sudo apt-get install libaprutil1-dev
$ sudo apt-get install libsvn-dev
$ sudo apt-get install libgdbm-dev

Once all that is installed, the "./configure" should run cleanly.  If it doesn't, then you're probably missing some library and will have to add it.

$ ./configure
$ make

Which will compile and link the FSVS program.

$ sudo cp src/fsvs /usr/local/sbin/
$ sudo chown root:root /usr/local/sbin/fsvs
$ sudo chmod 700 /usr/local/sbin/fsvs

Step 4: Association with the SVN repository

$ cd /
$ sudo mkdir /var/spool/fsvs
$ sudo mkdir /etc/fsvs/
$ cd /
$ sudo fsvs urls svn+ssh://svn.yoursvnserver.com/sys-somesystem/

Step 5: Telling FSVS what to ignore

When constructing ignore patterns, generally work on adding a few directories at a time to the SVN repository. Everyone has different directories that they won't want to version, so you'll need to tailor the following to match your configuration. However, I generally recommend starting with the following (this is the output from "fsvs ignore dump", which you can pipe into a file, edit, then pipe back into "fsvs ignore load"):

group:ignore,./backup/
group:ignore,./bin/
group:ignore,./cdrom/
group:ignore,./dev/
group:ignore,./etc/fsvs/
group:ignore,./etc/gconf/
group:ignore,./etc/gdm/
group:ignore,./etc/shadow*
group:ignore,./etc/ssh/ssh_host_key
group:ignore,./etc/ssh/ssh_host_dsa_key
group:ignore,./etc/ssh/ssh_host_rsa_key
group:ignore,./home/
group:ignore,./lib/
group:ignore,./lib32/
group:ignore,./lib64/
group:ignore,./lost+found
group:ignore,./media/
group:ignore,./mnt/
group:ignore,./proc/
group:ignore,./root/
group:ignore,./sbin/
group:ignore,./selinux/
group:ignore,./srv/
group:ignore,./sys/
group:ignore,./tmp/
group:ignore,./usr/bin/
group:ignore,./usr/games/
group:ignore,./usr/include/
group:ignore,./usr/lib/
group:ignore,./usr/lib32/
group:ignore,./usr/lib64/
group:ignore,./usr/local/games/
group:ignore,./usr/sbin/
group:ignore,./usr/share/
group:ignore,./usr/src/
group:ignore,./var/backups/
group:ignore,./var/cache/
group:ignore,./var/games/
group:ignore,./var/lib/
group:ignore,./var/lock/
group:ignore,./var/log/
group:ignore,./var/mail/
group:ignore,./var/opt/
group:ignore,./var/run/
group:ignore,./var/spool/
group:ignore,./var/tmp/

$ vim ~/fsvs-ignores-201105
$ sudo fsvs ignore load < ~/fsvs-ignores-201105

You can check what FSVS is going to version by using the "sudo fsvs status pathname" command (such as "fsvs status /etc"). Once you are happy with the selection in a particular path, you can do the following command:

$ sudo fsvs ci -m "base check-in" /etc

Repeat this for the various top level trees until you have checked everything in. Then you should do one last check-in at the root level that catches anything you might have missed.

Wednesday, May 18, 2011

SubVersion - splitting apart a very large repository

Back when we started using SVN in 2006, we went for ease-of-use and easy administration by putting all of our projects into a single repository.  At the time it was a few gigabytes in size and not a big deal.  Fast-forward 4 years and we're starting to wish we had split the repository up by client / project boundaries.  The tree looked like:

/A/ClientA/ProjectA1
/A/ClientA/ProjectA2
/A/ClientA2/ProjectA2A
/B/ClientB/ProjectB1
...

So my current project is to take the 18GB repository with about 13,000 revisions and split it out and re-base the paths so that the project directories are the top level of the repository.  Unfortunately, over the years, files have been copied / moved, folders have vanished / moved / been renamed, etc., so there's the potential for interesting fun.  This is made even trickier since we're doing the re-base a few levels down.

Warning: When you do a split, the default result is that the new split repositories will have the same SVN repository UUID (unique ID) as the original repository.  That is why the last step in this process is "svnadmin setuuid /path/to/new/repo".  You can see the UUID of an existing repository by using "svnlook uuid /path/to/repo".

Step 1: Raw Dump

First off, I suggest making a raw dump of the original repository, piped through 'gzip' which will make the next few steps faster.  Naturally, if anyone commits things to the old repository after this point those changes won't be migrated.  So you will want to address that issue by limiting access to the original repository, or work on the repository in sections and periodically update your raw dump to capture new changes before you start on the next section.  For our purposes, we simply said "these particular projects are off-limits until Thursday" and worked on a set of projects each week.

Note: all of the following is a single command.

# svnadmin dump --quiet /path/to/svn-repo |
gzip > /path/to/svn-raw-dump-svn-repo.may2011.dump.gz

That will create a .gz file that is about 30-50% larger than the old repository.  Our gzip'd dump file ended up at 44% larger (26GB vs 18GB).  Without gzip, the uncompressed dump file would have been a lot larger (between 5x and 6x larger than the gzip'd file).  The main benefits are that it gives you a static source to work with, shortens up the later command lines slightly, and it's easier to see how all this works if you do it bit by bit.  You'll probably want to also copy that .gz file off to permanent archival storage after this is all done.

(bzip2 would have created a 15-20% smaller file, but it also would have taken 2x-3x longer to create the file.  As it is, the CPU was the bottleneck for creating this initial dump file and is the bottleneck in some other steps as well.)

Step 2: Filtering the dump file

This process breaks out the single project directory that we want and puts it in its own dump file.  We will repeat this command once for every project that we're breaking out to a separate repository.  We drop any empty revisions and renumber those that remain during this set.  It will renumber the revisions starting at 1 and the new file will end up with a much lower revision count.  We're not adjusting the paths within the repository during this step.


# gunzip -c /path/to/svn-raw-dump-svn-repo.may2011.dump.gz |
svndumpfilter include --quiet --drop-empty-revs  --renumber-revs 
A/ClientA/ProjectA1 > /var/svn/svn-raw-ClientA-ProjectA1.dump

Notes:
- Leave the leading '/' off of the path that you want to include.
- Leave the trailing '/' off of the path that you want to include.

Running a search on the new dump file reveals the new revision numbers.

# grep 'Revision-number' /var/svn/svn-raw-ClientA-ProjectA1.dump
...
Revision-number: 60
Revision-number: 61
Revision-number: 62

Note that if you attempt to load the project-specific dump file into a new repository at this point, it will fail.  That is because the parent directories do not exist in the repository that you are loading into.  But if you create those parent folders, you can then import the dump file into the new repository.  I suggest creating a new scratch repository with "svnadmin create /var/svn/ProjectA-Test1", creating the necessary parent folders, then doing a "svnadmin load /path/to/repo < /path/to/dump" to verify that you understand this step.
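
A sketch of that verification step ("--parents" creates the intermediate directories in a single commit):

# svnadmin create /var/svn/ProjectA-Test1
# svn mkdir --parents -m "recreate parent dirs" file:///var/svn/ProjectA-Test1/A/ClientA
# svnadmin load /var/svn/ProjectA-Test1 < /var/svn/svn-raw-ClientA-ProjectA1.dump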


Step 3: Re-basing the project

Note: Depending on how many folder renames are in your original repository, you may have lots of trouble with the following.  In which case you should skip this and just load the dump file into the new repository without re-basing the paths.  Don't forget to change the UUID on the new repositories after loading.

The next step is to move A/ClientA/ProjectA1 back to the root of the repository during the import process.  We will do this by editing the dump file with 'sed' before loading it back in.  In the dump file, there are two types of lines that contain path information.  One starts with 'Node-path:' and the other starts with 'Node-copyfrom-path:'.  This is how 'svnadmin load' keeps track of what goes where in the repository tree.

# grep '^Node-path:' /var/svn/svn-raw-ClientA-ProjectA1.dump
Node-path: A/ClientA/ProjectA1
Node-path: A/ClientA/ProjectA1/Data
Node-path: A/ClientA/ProjectA1/Doc
Node-path: A/ClientA/ProjectA1/Trunk
...
# grep '^Node-copyfrom-path:' /var/svn/svn-raw-ClientA-ProjectA1.dump


Notes:
- There is never a leading slash ('/') and never a trailing slash ('/').
- The Node-path: argument cannot be empty.
- The parent directory must already exist in the SVN repository in order for a load to succeed.  So in order to load the above node paths, you would have to manually create the "A/ClientA" directory tree first.


As stated, we can use 'sed' to transform these path names on the fly.  And the following set of lines is all a single command.

# cat /var/svn/svn-raw-ClientA-ProjectA1.dump |
sed 's/Node-path: A\/ClientA\//Node-path: /' |
sed 's/Node-copyfrom-path: A\/ClientA\//Node-copyfrom-path: /' >
/var/svn/svn-newbase-ClientA-ProjectA1.dump

So if a line reads "Node-path: A/ClientA/ProjectA1" in the input, it will look like "Node-path: ProjectA1" in the output.

Now you can load this into the new repository.

# svnadmin load /path/to/new/repo < /var/svn/svn-newbase-ClientA-ProjectA1.dump


Step 4: Changing the UUID of the new repository

As I mentioned before, when you do a split like this, the repository UUID will end up as the UUID of the original repository after the "svnadmin load" step.  You can verify this behavior using the "svnlook uuid /path/to/repo" command.  You can change the UUID manually, or just have a new one assigned automatically with the "svnadmin setuuid" command.

# svnadmin setuuid /path/to/new/repo

Step 5: Verify the new repository, make backups

After you load the new repository, take an hour and verify that all of the project folders made it intact and that the version history is intact.  Then make a backup of the new repository.