Saturday, February 27, 2016

Simple usage example of jquery.inputmask by Robin Herbots

We're developing an application at work where the backend system is very limited in what characters it will accept over the wire, and where various fields have to be in a specific format.  While we validate everything on the backend before accepting it, I believe it is also useful to guide the user while they are entering data into the fields, via input masks.

The tool of choice for any website that is using jQuery would be Robin Herbots' jquery.inputmask plugin.  It is extremely powerful, still actively developed, and handles cases you may not have thought of yet.

After playing with this for a few days, I'm going to try to distill down to a very simple way of using the plugin in an application, without having to wire up each individual input element on the page.

Installation


You will need to reference three JavaScript files in your HTML page.  These <script> tags should be towards the bottom of the page, as is standard practice.  These should probably be referenced in the following order.


1) jQuery library (such as "jquery.min.js")


You can host it either locally on your web server or use the Google CDN (ajax.googleapis.com).


2) jQuery InputMask 3.x plugin (jquery.inputmask.bundle.js)


If the size of the bundle is too large for your tastes, then you can look at the InputMask documentation for other versions to use.


3) jQuery InputMask 3.x binding plugin (inputmask.binding.js from the extra/binding/ folder)


Binding


Place the following JavaScript at the bottom of the page inside either a separate .js file or embedded on the page in a <script> tag.


$(document).ready(function() {

  $(":input[data-inputmask-mask]").inputmask();

  $(":input[data-inputmask-alias]").inputmask();

  $(":input[data-inputmask-regex]").inputmask("Regex");

});


This wires up the specific attributes on the <input> tag.

Usage


With this wiring in place, using input masks, aliases and regexes on individual <input> elements is very easy.


1) A US-centric date field that takes "mm/dd/yyyy"


This uses a standard alias called "mm/dd/yyyy", which handles things like only allowing 1-12 for the month number, 1-31 for the day number, and treats 2-digit year entry as "20xx".


<input class="form-control col-md-3" data-inputmask-alias="mm/dd/yyyy" data-val="true" data-val-required="Required" id="DATE" name="DATE" placeholder="mm/dd/yyyy" type="text" value="" />


2) US-centric name field that only allows A-Z, 0-9 and some other characters via a regular expression, with a maximum length of 25 characters.


<input class="form-control col-md-4" data-inputmask-regex="[A-Za-z0-9 .,'-]+" maxlength="25" data-val="true" data-val-required="Required" id="NAME" name="NAME" placeholder="NAME" type="text" value="" />


Note: As mentioned previously, the backend system that we are storing data in is extremely limited in terms of what characters it can accept.  Accented characters such as Spanish or Icelandic names would play havoc, so we have to prevent their entry on the web page.  It sucks, but we're not in control of the backend system on this project.


3) US-centric phone field as "(999)999-9999"


<input class="form-control col-md-4" data-inputmask-mask="(999)999-9999" data-val="true" data-val-required="Required" id="PHONE" name="PHONE" placeholder="PHONE" type="text" value="" />


Note: There are far better aliases for handling phone numbers.  This particular backend system only allows 10 digit phone numbers, without a country code.  Take a look at the 'phone' alias for normal usage.


4) Input field that only accepts A-Z, a-z and 0-9, with a minimum of 10 characters and a maximum of 20.


<input class="form-control col-md-4" data-inputmask-mask="*{10,20}" data-val="true" data-val-required="Required" id="HICN" name="HICN" placeholder="HICN" type="text" value="" />


JSFiddle


The JSFiddle for this code can be found at:


http://jsfiddle.net/ThomasH/tx078u0a/





Wednesday, December 30, 2015

Ubuntu and software RAID, getting a device path that won't change from boot to boot

While I love mdadm (software RAID), it's perplexing me at the moment as it keeps changing its device number under Ubuntu.  When I created the array, I created it as "md100", but whenever I restart it ends up as "md127" (and could end up as something else!).  Normally, this doesn't matter, but I'm doing LVM on LUKS, so I need a static (unchanging) path to the array device.

This is a (4) disk array, running mdadm's raid10, and was created with the command:

$ sudo mdadm --create /dev/md100 --raid-devices=3 --spare-devices=1 --level=raid10 /dev/sd[bcde]1

After creating the array, I can check the details with:

$ sudo mdadm --detail --scan /dev/md100

ARRAY /dev/md127 metadata=1.2 spares=1 name=freya:100 UUID=deafbeef:deadbeef:beafdeff:beaffffa

Notice the "name=freya:100".  That's the key to finding a static path to the array.  If I then look under the /dev/md directory, I will see:

$ ls -l /dev/md
total 0
lrwxrwxrwx 1 root root 8 Dec 30 08:15 freya:100 -> ../md127

That means the static path to this array is "/dev/md/freya\:100" and I can use LUKS format on it with:

$ sudo cryptsetup -y -v luksFormat /dev/md/freya\:100

Alternately, I can search for the UUID in the /dev directory and find:

$ sudo find /dev -name '*deafbeef*'
/dev/disk/by-id/md-uuid-deafbeef:deadbeef:beafdeff:beaffffa

I can then add a LUKS keyfile to the device and unlock the device at boot by listing it in the /etc/crypttab file.  Either path will work, but the colons (:) will likely have to be escaped in /etc/crypttab.
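For illustration, a crypttab entry using the by-id path might look like the following (the mapping name and keyfile path here are made up; keep in mind the colon-escaping caveat above):

```shell
# /etc/crypttab: <target name> <source device> <key file> <options>
# Colons in the device path may need escaping on some setups.
md100_crypt /dev/disk/by-id/md-uuid-deafbeef:deadbeef:beafdeff:beaffffa /root/keys/md100.key luks
```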

PS: Yes, I've tried putting the array line in the /etc/mdadm/mdadm.conf file as just "ARRAY /dev/md100 UUID=deafbeef:deadbeef:beafdeff:beaffffa", which is supposed to fix the issue.
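One more thing worth trying on Ubuntu: the initramfs carries its own copy of mdadm.conf, so if that file is edited without rebuilding the initramfs, assembly at boot can still use the stale copy.  A sketch of the rebuild (Ubuntu default paths):

```shell
# Record the array as currently assembled, then rebuild the initramfs.
# Review /etc/mdadm/mdadm.conf afterwards and correct the device name if needed.
sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'
sudo update-initramfs -u
```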

PS #2: It's interesting that the mdadm UUID appears in /dev/disk/by-id and not /dev/disk/by-uuid.


Thursday, December 10, 2015

Installing borgbackup under Ubuntu Gnome

My favorite file-level backup tool for Linux (or OS X or Cygwin) is still borg (a.k.a. borgbackup).  The features that I rely on are:
  • Efficiency when dealing with millions of files, borg is very fast at scanning the file system and figuring out what needs to be backed up.  In the past, I've run it against an IMAP mail server file system with a few million files and about 100GB.  Each snapshot would only take 15-20 minutes instead of a few hours for some other solutions.
  • Very few files created on the target file system.  Other solutions like rsnapshot or rdiff-backup will turn 1 million source files into 10 million backup files (or worse) due to how they implement snapshots.  While it's useful to be able to browse the backup directory just like the source file system, it causes all sorts of issues for disk performance or copying backup directories off to removable media.  In contrast, borg creates only a few dozen or few hundred files per snapshot.
  • Deduplication using variable block sizes.  This is a huge win if you have a lot of files where parts of the files are identical.  The algorithm in borgbackup will find those identical sections and only store them once in the backup repository.
  • Efficiency over the network.  For the most part, as long as borgbackup is installed on both the source and destination systems, borg is very good at sending the least amount of traffic over the wire. With variable block deduplication, it's going to be more efficient than other file-level deduplication solutions.
  • Built-in client-side encryption.  While I don't use this (my backups are stored on LUKS encrypted file systems), this could be useful if you are backing up to a destination server that you do not trust 100%.
  • Compression of backup data prior to transmission to the repository server.  This also helps reduce the size of the repository on the target server.
  • Works over SSH (as does rsync, rsnapshot, rdiff-backup).
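As for the install itself, on a recent Ubuntu it can be as short as the following; this is a sketch, and the -dev package names may vary between releases:

```shell
# Build prerequisites for the pip install (names may differ by Ubuntu release)
sudo apt-get install python3-pip python3-dev libssl-dev liblz4-dev libacl1-dev
# Install borg for the current user via pip, then verify
pip3 install --user borgbackup
~/.local/bin/borg --version
```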

Monday, September 28, 2015

Office365/ExQuilla dropping spaces / line breaks in messages

Since mid-September, we have been hunting a problem that occurs for our Thunderbird users when they attempt to send email via the Office 365 mail server.  It seems to be limited to just those using the ExQuilla plug-in, which gives access to messages/contacts on the Exchange server.

The symptoms are that in the body of the email, words will be run together without spaces between them.  It's not every word, but rather every 10th or 12th word where the spaces will be dropped.  The underlying cause of this seems to be a known issue at ExQuilla.

https://exquilla.zendesk.com/entries/83693609-Office365-removes-line-breaks-from-sent-emails

Kent James
posted this on Sep 23, 10:57
Beginning about September 17, 2015, we started receiving reports from ExQuilla users who used Office365 as a server, that sent emails have all line breaks removed, which appears as an issue to both the recipient as well as in the Sent Items folder.
You can also see this bug in action if you go to your "Sent Items" folder in Thunderbird, under your account that uses the ExQuilla plug-in.  Messages which were affected by the bug will have spaces missing in the copy that was saved to the "Sent Items" folder.

Now, other users have suggested a few workarounds to the issue, some of which are a bit tricky to implement, or require action to be taken on every single sent message.

I have an alternative fix:


  1. Open up Thunderbird.
  2. Go to the "Options" dialog.
  3. Find the "Composition" tab.
  4. Go to the "General" tab under "Composition".
  5. Click on the "Send Options..." button.



In this dialog, you need to make sure that two things are done:

  1. Make sure that the "Text Format" option says "Send the message in both plain text and HTML".
  2. Under the "Plain Text Domains" tab, add an entry for "*.*".

After that, you should be free of the line-break bug that was introduced to the Office 365 environment.

Update 2015-10-01:  The problem has supposedly been fixed on the Microsoft side.  I'll have to do some testing to see whether it's gone for us.

Update #2 2015-09-28
Another user has now reported that the problem disappeared for them after a server update on Office365. If you are experiencing this issue, it will probably go away on its own when Microsoft updates your server.




Friday, September 18, 2015

Changing how Linux Mint identifies network devices

I've been running Linux Mint 17.2 (mostly) happily on my old 2007 Thinkpad T61p.  However, even with 8GB of RAM and a SSD, it was a bit too sluggish for my tastes.  So I purchased a used T530 which is about 5-6 years newer and has an i7 CPU.

Swapping the SSD from one unit to the other was easy, and Linux Mint booted right up.  But I couldn't get a network connection (wired or wireless).

Troubleshooting step #1 was to boot up the Linux Mint 17.2 DVD and see whether it could see the network.  It saw both the wired ethernet port as well as the Thinkpad's WiFi.  So that indicated that there was no problem with the hardware and that Mint does support the Thinkpad network chips out-of-the-box.

Rebooted into the installed operating system.  I poked around a bit, then decided to just search /etc for any files containing 'eth0'.

# find /etc -type f -exec grep -H 'eth0' '{}' \;

That quickly led me to the /etc/udev/rules.d/70-persistent-net.rules file.  Inside of there are lines that map the device (using the MAC) to names like 'eth0' 'eth1'.  Because this was a drive swap from the old to the new laptop, the old laptop's ethernet/wifi ports were in there already as eth0/wlan0.  Linux Mint then added new lines for the devices on the new laptop as eth1/wlan1.

The simple fix was to comment out the old SUBSYSTEM= lines, then change the NAME="" strings to be eth0 and wlan0.
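For illustration, the edited file ended up looking something like this (the MAC addresses here are made up, and the real rules carry a few more ATTR matches):

```shell
# /etc/udev/rules.d/70-persistent-net.rules
# Old laptop's adapters, commented out:
# SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", KERNEL=="eth*", NAME="eth0"
# New laptop's adapters, with NAME changed from eth1/wlan1 back to eth0/wlan0:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", KERNEL=="wlan*", NAME="wlan0"
```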

After a reboot, everything worked as expected, without having to change anything else.

Sunday, September 06, 2015

Windows 7 SP1 Windows Update takes forever to search for updates

One of the difficulties that I've run into while setting up the VM for Win7 on my Linux Mint laptop is that the Windows Update service will take hours/days/forever to figure out what updates are needed in a fresh Windows 7 Service Pack 1 install.

The symptoms are:

  • Windows Update is stuck on "Checking for updates..."
  • High memory usage by "svchost.exe" (2+ GB)
  • High CPU usage by "svchost.exe" (maxes out one of the CPU cores)
  • The process ID of "svchost.exe" matches that of "wuauserv"

The key to diagnosing this is to use "Task Manager" and adjust which columns get displayed so that you  can see the process ID (PID) of the runaway "svchost.exe" process.  If that PID matches the one used by "Windows Update" on the "Services" tab, then this is probably affecting you.

The first step is to install a newer version of the Windows Update client.  The one from August 2015 is available at the following link:

Windows Update Client for Windows 7 (Aug 2015)

The second step is to reboot and let the machine sit at the "Checking for Updates" status in the "Windows Update" application for 24-72 hours.  Patience is key here as it will eventually figure out that it needs to download 200+ patches.

Once that process starts, it will probably manage to download everything, but will hang on #141.  You can power off the machine and back on at this point, but you're probably just going to have to let it sit for another 12-36 hours.

And once you do finally get the system patched, you should take an image of it (Acronis True Image, Norton Ghost, etc.) so that you can reset back to this point if the machine ever gets corrupted.  That way you can skip installing all of those updates again.

Sunday, August 30, 2015

Switching Linux Mint on Thinkpad T61p

I've been wanting to try to switch to Linux full-time on my home desktop/laptop machines for a while, and the amount of spyware / tracking / phoning home in Windows 10 is pushing me to make a real effort this year.  So, I'm taking my old Thinkpad T61p and putting Linux Mint 17.2 on it.  My requirements were:

  • Full disk encryption
  • Virtualization for a Win7 or Win10 guest
  • Running as much as possible inside of Linux
  • 64bit OS
A lot of my day-to-day software is already multi-platform or open-source.  For instance:
  • Mozilla Firefox and Google Chrome for web browsing
  • Mozilla Thunderbird for email
  • SVN or git for version control
  • GPG for encryption
  • Pidgin for instant messaging and IRC
  • Audacity, Handbrake, VLC for multimedia work
  • About half of my Steam games are Linux / SteamOS
  • OpenVPN
  • pgAdmin III
  • Synergy
The troublesome applications will be:
  • Microsoft Office is iffy under WINE
  • Microsoft Access does not run under WINE
  • Skype might be a problem
  • ODBC connections for use with MSAccess
  • The half of my Steam games that are not cross-platform
  • Cisco AnyConnect VPN client for access to my virtual servers
Because I'm doing this on my old laptop (until the T550p comes out later this year), I'm not too worried about video games that won't run under Steam for Linux.  The only games that this old Core2Duo system will support are the lighter-weight / older games that don't need much CPU or video graphics performance.

Sunday, July 26, 2015

Installing atticmatic/borgmatic on Cygwin

There's a wrapper project for Attic / Borg backup called "Atticmatic" on GitHub.  It helps simplify the process of doing daily backups using attic/borg.

Packages needed on Cygwin (in addition to those needed for attic/borg):

  • mercurial (hg)
Creation of the SSH key (assumes that you have the 'openssh' package installed):
  1. mkdir ~/.ssh
  2. chmod 700 ~/.ssh
  3. ssh-keygen.exe -t rsa -b 4096 -N '' -C 'Backup Key 2015' -f ~/.ssh/ssh-backup
  4. chmod 600 ~/.ssh/ssh-backup
Then create a ~/.ssh/config file to point at the server:


Host backups.example.com
HostName realservername.example.com
Port 22
User usernameonbackupserver
IdentityFile ~/.ssh/ssh-backup

Add the new public key to the remote server, to the ~/.ssh/authorized_keys file for the 'usernameonbackupserver' account.

Now test that you can login to the remote server, using "ssh backups.example.com" (use the "Host" line entry from ~/.ssh/config).  And make sure that you can create files/directories in the location where you want to store the backups.

Make sure that your Attic/Borg backup has been initialized.

$ borg init ssh://backups.example.com/path/to/borg/directory/borgmatic.borg

Now you'll need to create an empty /etc/borgmatic/exclude file and edit the sample /etc/borgmatic/config file.  Once those files are set up, you can run "borgmatic" (or "borgmatic -v 1" to see details) on a regular basis.
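To make the runs automatic, a cron entry along these lines works if you have the Cygwin cron package installed and running (the borgmatic path is an assumption; check it with "which borgmatic"):

```shell
# crontab entry: run borgmatic every night at 02:00, log output via syslog
0 2 * * * /usr/local/bin/borgmatic 2>&1 | logger -t borgmatic
```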


Thursday, July 16, 2015

SSH keygen under Cygwin

You will need to install the "openssh" package using the Cygwin installer before doing this.

Notes:

Typical steps for creating SSH keys:

$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cd ~/.ssh
$ ssh-keygen.exe -t rsa -b 3200 -C 'Borg backup key thomast61p July 2015' -N '' -f ~/.ssh/ssh-borg-backup-july2015

In this particular case, I am creating a RSA/3200 key with no password (-N '') and with a comment indicating that it will only be used for Borg backups.  Because the key has no password, I should only use it in conjunction with the borg backup command on a separate server-side account that has very limited permissions on the server.
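On the server side, the usual way to lock a passwordless key down is to prefix its authorized_keys entry with restrictions; with borg, you can also force the connection into "borg serve".  A sketch (the repository path and key material are placeholders):

```shell
# ~/.ssh/authorized_keys on the backup server (all one line per key):
# forces borg serve, confines it to one path, and disables forwarding/PTY
command="borg serve --restrict-to-path /srv/backups",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... Borg backup key thomast61p July 2015
```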

Saturday, July 11, 2015

Linux KVM shutting down all virtual guests

On my current virtualization server running Linux KVM (QEMU), I want to shutdown all guests so that I can unmount the file system containing the VM image files and make it larger.

#1 - See what guests are running

# virsh list
 Id    Name                           State
----------------------------------------------------
 1     dc1                            running
 3     cfmc87                         running
 4     win7c                          running

#2 - Use the libvirt-guests service to suspend all of them.

# service libvirt-guests stop

Running guests on default URI: dc1, cfmc87, win7c

Suspending guests on default URI...
Suspending dc1: ...
Suspending dc1: done
Suspending cfmc87: ...
Suspending cfmc87: ...
Suspending cfmc87: done
Suspending win7c: ...
error: Failed to save domain bb0d169d-a373-544e-a0f7-99e338673177 state
error: internal error unable to execute QEMU command 'migrate': An undefined error has ocurred

#3 Dealing with troublesome guests

Oops, looks like win7c is being difficult.  So let's just force it down.

# virsh shutdown win7c
Domain win7c is being shutdown
# virsh list      
 Id    Name                           State
----------------------------------------------------
 4     win7c                          running

Even with issuing the shutdown command and waiting a few minutes, the "win7c" guest is still running.  So we'll have to "destroy" the instance.

# virsh destroy win7c 
Domain win7c destroyed
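With more than a handful of guests, the shutdown-and-wait dance can be scripted; a sketch, assuming your libvirt version supports "virsh list --name":

```shell
# Ask every running guest to shut down gracefully
for vm in $(virsh list --name); do
    virsh shutdown "$vm"
done
# Poll until no guests remain; "virsh destroy" any stragglers by hand
while [ -n "$(virsh list --name | tr -d '[:space:]')" ]; do
    sleep 5
done
```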

#4 Checking that all files are released

Now I can use the "lsof" command to verify that there are no open files on the mount point.

# lsof /srv/vms
(returns nothing)

#5 Umount, fsck, resize, fsck, remount

The VM image LV is stored on /dev/md127, inside a LVM thin pool.  It's currently 200GB and is about 70% full, so I want to add another 100GB.

# umount /srv/vms
# fsck -f /dev/vg10/vms
# lvextend -L+100G /dev/vg10/vms
# resize2fs /dev/vg10/vms
# fsck -f /dev/vg10/vms
# mount /srv/vms

#6 Restart the guests

# service libvirt-guests restart
# virsh list

And now my /srv/vms filesystem is no longer having space issues.

Wednesday, July 08, 2015

Installing borg backup (fork of Attic) on Cygwin (Windows)

I've used Cygwin with rdiff-backup before to backup a Windows box to a backend Linux server over SSH, but given the success that I've had with Attic backup, I'm going to try this with the fork of Attic which is called "Borg".

Step #1 - Download and install Cygwin.  The following packages need to be installed before you can install borg backup.

binutils (not sure)
gcc-g++
libuuid-devel (not sure)
openssl-devel
python3
python3-setuptools
wget

Step #2 is to install "pip" inside the Cygwin environment.

$ cd ~/
$ mkdir -p ~/downloads/python
$ cd ~/downloads/python
$ wget https://bootstrap.pypa.io/get-pip.py
$ python3 get-pip.py
$ pip3 --version

The "pip3 --version" command should return a version number string if pip is installed.

Step #3

$ pip3 install borgbackup

Step #4

At this point, if you type "borg" at the command prompt, you should see information about Borg such as the version and list of commands.

Tuesday, July 07, 2015

pfSense RRD graphs for NTP - system jitter vs clock jitter

Since installing pfSense and setting up the NTP server, I've been wondering for a while what the difference is between "System Jitter" (sjit) and "Clock Jitter" (cjit) in the RRD graphs.  For instance, in the following graph, we can see that the system jitter value has gone way up.


So what causes system jitter? Looking at the Status -> NTP page gives us a bit of a clue.


At least two of our upstream time servers have huge jitter numbers, 20-40ms, while all of the other upstream time clocks are reporting jitter of less than 1ms.  So something is happening to those two time servers, or a common route between us and those time servers.

That leads me to believe that the green line in the NTP graphs (sjit) indicates how much jitter there is between our NTP server and the upstream servers.  In general, it probably means there is congestion between us and that server and that packets are arriving out of order or late.

Saturday, June 27, 2015

Inexpensive and power efficient refurb PCs for firewalls

This is a follow-up on my earlier post about the Lenovo M58p (Intel Core2 Duo E8400 @ 3GHz) that I'm using for my home firewall.  It clocks in at 38-40W idle and 50-60W under load (which is rare).

If you go to NewEgg's site and go into the Desktop Computers category, you can find all sorts of refurbished boxes for under $150.

So what sort of CPUs are there and how do they stack up in terms of TDP and idle TDP?

Well, according to various charts on the net:

Intel Core 2 Duo E6600 and E6750 should both draw less power than the E8400 that I'm using.  At a guess, that would put total power draw closer to 30W idle rather than 40W.  The Core2Duo line also performs better than the old PentiumD and Pentium4 chips when under load.

So Core2Duo is better than a Pentium D or Pentium 4 chip, and the slower E6600/E6750 C2Ds draw less power than the 3GHz E8400.

The i3/i5 series (Sandy Bridge or later) should offer lower power draw than the older Core2Duo chips.  Power draw dropped again slightly with Ivy Bridge and Haswell.

That being said, the least expensive i3 refurb is $220-$250, while a Core2Duo unit can be found for $80 or less.  So I think until the i3 units start being retired in another 2 years, the C2D units are going to be the best choice.

Monday, June 22, 2015

Using aliases in pfSense to create rules for protocols with multiple port ranges

File this one under "things I wish I had known sooner".  When setting up pfSense firewall rules on an interface, you'll run into protocols which have multiple ports that are not in a contiguous range.  One example of this is the common web server (HTTP) ports of 80, 443 and 8080-8081.

This leaves you with two options.

  1. Set up multiple rules.  This is the best option because you only specify the exact ports that you want, with no extras thrown in.  The downside is that for some protocols, you will end up with multiple rules that have to be maintained.
  2. Specify a rule with a broad port range.  Which is sort of okay if you are only allowing a handful of extra ports, but it is not ideal.
Enter the concept of aliases (under the Firewall -> Aliases menu) in the pfSense web UI.  Here you can create an alias which lists out all of the ports associated with a particular protocol.


After creating the alias, you then create or edit a rule and use that alias in any fields with a red background.  Such as the destination port field.


After clicking the "Save" button, rules that are using port aliases will show up in the rule list looking like:


Needless to say, that can make your life much easier when maintaining large lists of ports as long as all of the ports in question are using the same protocols.

Mail client ports (IMAP/POP3/SMTP) are also good candidates for an alias rule.  One caution is to never allow 25/tcp to egress your network, only your mail server in the DMZ should be allowed to contact other mail servers via port 25.  Every internal client should be forced to either use tcp/465 (SMTP/SSL) or tcp/587 (SMTP Submission) or route their SMTP traffic through your mail server.



Saturday, June 20, 2015

Using badblocks to prepare an offsite USB backup drive

Part of my backup strategy is to write my backups to external USB drives which are protected by LUKS encryption.  However, before I will put a drive into service, I like to heavily test any mechanical drive for a few days to see whether it will hold up to the wear-and-tear of being a portable drive.

(There's little or no point in doing this on a SSD.)

Currently, my preferred method is to use "badblocks" in destructive write-testing mode to test the drive.  For example:

# badblocks -p 3 -wsv -t random /dev/disk/by-id/usb-SAMSUNG_HM502JX_C######-0\:0

The "-p 3" tells badblocks that the drive has to survive (3) passes without finding any new bad sectors before badblocks will stop running.  Most modern mechanical hard drives have spare sectors that can be used when a bad spot is located on the surface.  By repeatedly writing to bad or dying parts of the drive surface, we can force the drive's firmware to remap those failing areas to the spare sectors.

The downside of "-p 3" is that this increases the amount of time needed to test the drive before placing it into service.  A rough estimate is that a 1TB drive over USB 2.0 will require 3-4 days of testing with "-p 3".  If you are using a USB 3.0 drive and it is hooked up to a USB 3.0 port, then it might only take 20-30 hours to test.

The "-wsv" tells badblocks to do write-testing (which destroys all data on the drive), as well as giving status output and being verbose about what it is doing.

The "-t random" specifies that we want to use a random pattern for the test.  Please note that this is not a suitable replacement for "shred" when wiping a disk or preparing it for LUKS.  You should still run "shred" on the drive prior to using it (or giving it away to someone else).
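When it is time to wipe, the usual invocation looks like the following (the device path is an example; triple-check it before running, since this destroys all data on the target):

```shell
# -v: show progress, -n 3: three random overwrite passes
sudo shred -v -n 3 /dev/sdX
```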

Drives that have started to fail will often sound like they are seeking with big pauses during the badblocks write pass.  If you are seeing big pauses in reads/writes from a drive during testing, it's possible that the disk is damaged or about to permanently fail.  You will have to use your best judgement whether you trust the disk for backups.

(If I think a drive is failing, then it gets a second or third pass with badblocks and the "-p 3" option.  It will usually die during the 2nd or 3rd pass, or all of the bad spots will have been remapped and successive runs will go quickly.)

pfSense rate limiting, egress filtering, opendns filtering for wifi hotspot

One of the experiments that I'm running with the new network is running an open / unsecured WiFi hotspot for the neighbors.

Some of the protections that I'm using:

  • Uses OpenDNS servers with some categories of websites blocked.  I'm using the "OpenDNS Home" service which lets me pick and choose which categories are blocked by default.  In addition, the OpenDNS server will display a "blocked content" page for regular HTTP traffic where users can request an unblock.  Unfortunately, this feature does not work well for HTTPS (SSL) sites, but it still blocks the site.
  • Access to other DNS servers is blocked, clients can only access the two OpenDNS server IP addresses.
  • All rules on the interface are rate-limited to 3Mbps down and 1Mbps up.  This limits bandwidth abuse and slows down file sharing.
  • Heavy egress filtering.  All outbound traffic is blocked by default except for the whitelisted ports/protocols.


Now, this is not 100% foolproof.  But I at least want to limit the possible damage and take at least some steps against abuse.  I'll probably use this setup at a company that I'm consulting for where they want to offer open WiFi in their waiting area.

One thing I would like to do is set up a "Captive Portal" on the interface which forces the user to enter their cell phone number and receive a voucher code via SMS that is good for 3 or 7 days.  I have to figure out how to do that with pfSense and see how it works in practice.

The other thing I plan on doing is setting up a similar SSID/VLAN, but with higher bandwidth limits, more ports and no OpenDNS filtering for authenticated guests.  That would probably be a 20Mbps down / 5Mbps up setup protected by WPA2/PSK.  Think along the lines of "neighbors" or "friends" who you want to allow use of the internet pipe, but do not want to allow onto your interior network.  This would also be a good setup to use in an office environment for BYODs that only need internet access (such as clients).

Sunday, June 14, 2015

pfSense on Core2Duo E8400 refurbished PC

A rough power estimate for my little SFF (small form factor) refurbished PC that I'm using for a pfSense firewall:

  • Intel Core2 Duo E8400 @ 3GHz
  • 4GB RAM
  • 120GB SSD
  • Dual-port Intel PCIe x4 NIC
At idle, it consumes about 38W when the CPU throttles back to 1.8GHz.  That's pretty good for a PC that is not designed to be a low-power / fanless unit.  Under load, that goes up to about 60-65W.

38W @ $0.15/kWh = $50/yr
65W @ $0.15/kWh = $85/yr
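Those yearly figures come straight from watts × hours × rate; a quick sketch of the arithmetic using the wattages and rate above:

```shell
# $/yr = W / 1000 (kW) * 8760 (hours/yr) * rate ($/kWh)
cost_per_year() { awk -v w="$1" -v r="$2" 'BEGIN { printf "%.0f\n", w / 1000 * 8760 * r }'; }
cost_per_year 38 0.15   # prints 50
cost_per_year 65 0.15   # prints 85
```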

So even if I got something that stayed below 15W, I'd only save $50-$60/yr.  A lot of the low power units are $300-$500, which would be a very long time before they'd pay off.

Note: You do need to enable PowerD under System -> Advanced -> Miscellaneous -> Power Savings in order to get the CPU to throttle down when idle.  I recommend "Hiadaptive" for an office firewall, but you might want to experiment.

The biggest CPU hog on my current pfSense setup is "ntopng".  While doing bi-directional gigabit testing, that ate up 10-12% of my CPU power.

Tuesday, June 09, 2015

pfSense Firewall CPU load estimate

According to the pfSense dashboard, I have:

Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
2 CPUs: 1 package(s) x 2 core(s)

When running a quick speed test, "top" shows about 5% system load at 60Mbps.  That gives a rough upper-end of around 1200Mbps (1.2Gbps) for switching speed.  At a guess, that might be closer to only 1Gbps performance under heavy traffic.

1Gbps of capacity is plenty for the moment where I have:

- 50/50 Mbps service from Verizon FIOS (seems to peak at 60/60)
- 802.11 b/g/n (11-54Mbps)
- 802.11ac (tops out at around 1Gbps)

But it may not be enough for connecting together multiple gigabit LAN segments.  So I will need to keep all high bandwidth traffic on the same VLAN so that the traffic gets handled by the switches without touching the pfSense firewall.

Update #1 (Jul 10 2015): Suricata cuts the performance of the WAN interface (in terms of CPU load per Mbps) by a factor of 5x-10x.  While I could probably route 1.2-1.5Gbps with this firewall, a 30Mbps load on the WAN, which is monitored by Suricata, resulted in 20% CPU load.  That puts my upper-bound for WAN traffic at only 150Mbps.

VLAN adventures with Netgear GS108T and TrendNet TEW-814DAP

As part of setting up my new home network, I'm experimenting with VLANs.  The pfSense firewall has the following user-defined VLANs on the interior port.  Each of these VLANs has a separate address range (all are IPv4 with a 24-bit netmask, i.e. 192.168.10.0/24). The pfSense firewall is always the ".1" address on each network segment and routes traffic between the segments.

em0 / 12 - Unsecured Guest WiFi
- This will (probably) be the VLAN used for an access point that is not password protected and does not encrypt traffic.  I plan on limiting it to 1Mbps, plus putting a SMS-authentication captive portal on it, plus pointing it at OpenDNS with heavy filtering.

em0 / 24 - Secured Guest WiFi
- This will be protected with an easy-to-enter WPA2/PSK password.  Suitable for handing out to people that I marginally trust or know.  No access to the internal LAN, and only selected ports allowed out.

em0 / 36 - Internal Guest WiFi
- Protected with a moderate strength WPA2/PSK password.  Suitable for friends.  Wide open access to the internet, limited access to the internal LAN.

em0 / 48 - Internal WiFi
- Protected via WPA2/PSK with a strong password.  Has full access to the internal LAN.

em0 / 87 - LAN
- This is the internal LAN network.

em0 / 100 - Infrastructure
- In a normal shop, all switches / APs would be members of this VLAN, and management would only be allowed via this VLAN.

em0 / 999 - Blackhole VLAN ID (nothing should ever listen here).
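pfSense defines these VLANs through its web GUI (Interfaces / Assignments / VLANs), but under the hood they are ordinary FreeBSD vlan(4) interfaces.  A rough sketch of the equivalent manual configuration is below; the interface names match this post, while the exact addresses are illustrative (the post only specifies that each segment is a /24 with the firewall at ".1").

```shell
# Hypothetical sketch of FreeBSD VLAN interfaces equivalent to the pfSense
# GUI configuration above.  Requires root; pfSense manages this for you.
ifconfig vlan48 create vlan 48 vlandev em0    # Internal WiFi
ifconfig vlan48 inet 192.168.48.1/24          # example address; firewall is ".1"
ifconfig vlan87 create vlan 87 vlandev em0    # LAN
ifconfig vlan87 inet 192.168.87.1/24          # example address; firewall is ".1"
```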

That's the easy part, defining the various VLANs.  Those same VLAN IDs have to be configured in the GS108T switch as well.  This is done under Switching / VLAN / VLAN Configuration.


Note that the first three defined VLANs (1/2/3) are hard-coded into the GS108T firmware and cannot be removed.  As I indicated in a previous post on the subject of VLAN security, you should avoid using VLAN #1 for anything.  And now I would amend that to say that you should avoid VLAN IDs 1-9.

There are a few key pieces to think about when setting up VLANs:

Inbound #1 - Are the packets already tagged when they reach the switch (inbound) from another device (i.e. switch or WiFi Access Point)?

Inbound #2 - Should untagged (no VLAN header) packets inbound to the switch be blocked/dropped?

Inbound #3 - What VLAN should untagged packets be assigned to on inbound?

In the GS108T, the inbound concerns are handled under Switching / VLAN / Port PVID Configuration.  This screen will allow you to apply a VLAN tag as packets enter the switch.


The WiFi Access Point attached to "g2" does not support VLAN tagging of its various SSID networks, so we have to treat it as a "dumb" device.  When the AP sends packets to the switch, they get assigned to VLAN #48.

Outbound #1 - Should the VLAN header be stripped as the packets leave the switch via a particular port?

Outbound #2 - Does the device attached to this port understand VLAN tags?

Outbound #3 - Should untagged packets be blocked from exiting via this port?

Egress handling is configured through Switching / VLAN / VLAN Membership and is somewhat unintuitive in the GS108T user interface.  You need to read this screen as:

"If a packet that belongs to VLAN ## is traversing the switch and about to egress (exit/outbound), what ports is it allowed to leave by and what should happen to the VLAN header?"

In the case of VLAN #48 (Internal WiFi), the answers to that are:
  • VLAN #48 is only allowed to egress via port "g2" and port "g3". 
  • "g2" is our "dumb" WiFi Access Point
  • "g3" is the "smart" pfSense firewall that understands VLAN tags
  • Packets going to the WiFi Access Point need to have VLAN headers stripped
  • Packets going to the pfSense firewall should have VLAN headers left intact


The above shows that any packets on VLAN 48 are only allowed to leave untagged (U) via "g2" (WiFi AP) or tagged (T) via "g3" (pfSense firewall).


Monday, June 08, 2015

Checking authorized_keys for duplicate SSH key lines

After a while, unless you are using Puppet or some other configuration management tool, your ~/.ssh/authorized_keys file will end up with half a dozen (or dozens of) different SSH public key lines.  And depending on how careful you were, some of them may be duplicated or mangled.

One way to make sense of the madness is to look at the first N bytes of each line in the ~/.ssh/authorized_keys file and look for strangeness.

$ cut --bytes=1-80 ~/.ssh/authorized_keys 
ssh-dss AAAAB3NzaC1kc3MAAACBAP0090dCcnFwtuP9Rmjgf7eHR20JdmHASXS+un4cAKNYpwHIDlA9
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC54VI+7J1DoEEiJml8JusdM4M9UNNIA8gv/JER7rQ7
qDkz/87jwJ0jufKy7XQyiiwHGg7GvqMej8enLCN90wc4xOTrFUO9FaSinWGOJmtdjVH8m7oXZ+OfClOX
h1o14nqandnzYPNyOH7iHZyVcAl082Ua1nmsesrAj7ilNPLZFiQhGhPAbWPz/O9dVBvfW+I5stRgb7FD
014

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAZB+mI3xeVeYo3B2yJqvQYUpVBrNtMmtd3iAj6O6pMIvRGzm

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsrOtkkIXu0ci/8h79/zCFgAoDZgw6yQExBs4o/KjfmB/

Just by looking at the above output, I can see that the first ssh-rsa key was not placed on a single line as it should have been, but was broken across multiple lines.  After a quick edit of the file, now the output looks like:

$ cut --bytes=1-80 ~/.ssh/authorized_keys
ssh-dss AAAAB3NzaC1kc3MAAACBAP0090dCcnFwtuP9Rmjgf7eHR20JdmHASXS+un4cAKNYpwHIDlA9
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC54VI+7J1DoEEiJml8JusdM4M9UNNIA8gv/JER7rQ7

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAZB+mI3xeVeYo3B2yJqvQYUpVBrNtMmtd3iAj6O6pMIvRGzm

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsrOtkkIXu0ci/8h79/zCFgAoDZgw6yQExBs4o/KjfmB/

Now I can run the output of that through sort/uniq to see whether I have any duplicate SSH public key lines:

$ cut --bytes=1-80 ~/.ssh/authorized_keys | sort | uniq -c -d
      5 
      2 ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAZB+mI3xeVeYo3B2yJqvQYUpVBrNtMmtd3iAj6O6pMIvRGzm
      2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC54VI+7J1DoEEiJml8JusdM4M9UNNIA8gv/JER7rQ7

Looks like I do have a pair of duplicated SSH public key lines.  This is a good thing to know, because if I were trying to remove a particular SSH public key, I might remove one copy but not see the other.
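Note that the cut | sort | uniq pipeline above only compares the first 80 bytes of each line, which is enough to spot duplicates but doesn't remove them.  One way to actually strip exact duplicate lines, while preserving the order of the file, is a classic awk one-liner.  This is a sketch that writes a cleaned copy rather than editing in place, so you can diff it against the original first:

```shell
# Keep only the first occurrence of each exact line, preserving order.
# seen[$0]++ is 0 (false) the first time a line appears, so the implicit
# "print" fires once per distinct line.  Writes a cleaned copy.
awk '!seen[$0]++' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.deduped
```

One caveat: this only catches byte-for-byte duplicates.  Two copies of the same key with different trailing comments will both survive, which is why eyeballing the first 80 bytes is still useful.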