User:Mjb/FreeBSD


This is just a compilation of random notes relating to FreeBSD system administration, mainly for my own benefit. It is focused on FreeBSD 8. Any questions/comments, email me directly at root (at) skew.org.


OS installation & upgrade

How much disk space to allocate?

You need roughly 15 GB for the OS, ports, and updates (buildworld etc.).

You also need space for "userland" (home directories, websites, databases) as well as mail and temporary files; this totals another 5 GB on my modest system with few users.

You also need swap space, the ideal amount of which depends on how much physical RAM you have and how much RAM your system will ever need at one time; "twice the amount of physical RAM" used to be the recommendation, and 4 GB seems to be a fairly standard amount these days, but I find that 500 MB is plenty. Ideally, swap space should be a partition, not a file.
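
To sanity-check your choice on a running system, you can compare physical RAM against actual swap usage; both commands are in the base system:

  • sysctl hw.physmem (physical RAM in bytes)
  • swapinfo (configured swap devices and how much is actually in use)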

It used to be recommended to have separate partitions and slices (sub-partitions) for certain directories, but in my experience, it is perfectly fine to have one partition for swap and one for everything else. Consider using a separate drive for userland, so that you can recover more easily in case the OS drive dies.

Nevertheless, here's what I did when installing FreeBSD 8 with only one drive and one regular user:

  • / = 500 MB (actual use is ~340 MB) – newer releases require more than 512 MB to hold old+new kernel
  • /tmp = 500 MB (actual use is near zero for me)
  • /var = 1.5 GB (actual use is ~790 MB for me)
  • /usr = the rest (68 GB in my case, actual use is under 20 GB)
  • 1 GB of swap space on a separate partition.

See what version of the OS is actually running

The standard uname -a method doesn't really work, because it just shows you what the OS version/branch/patch level was when the kernel was compiled; so basically it is the kernel version. The current userland version info must be obtained some other way.

In FreeBSD 10 and up, there's a tool for this:

  • freebsd-version

By default, it reports the userland version at the time the tool was built, which should be correct in most cases.
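
The tool's -k and -u flags make the kernel/userland distinction explicit, so you can compare the two directly:

  • freebsd-version -ku (prints the installed kernel version, then the userland version)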

Otherwise, if your OS source code (/usr/src) is current, then this should work:

  • grep -v '^#' /usr/src/sys/conf/newvers.sh | head -4

Example output:


TYPE="FreeBSD"
REVISION="8.3"
BRANCH="RELEASE-p7"

Upgrade to a new patch level

The patch level ("-p7" in the example above) correlates with security patches that were released as replacement binaries for the OS.

Of course it's possible you applied patches and rebuilt some binaries yourself, according to instructions in the security advisories you get by email (you did sign up for them, right?) ... in which case the patch level is not really accurate.

Regardless, these binary patches are only available for OS versions that were distributed as binaries, and that are still "supported", i.e. not more than 2 years old. I think this means pretty much just the latest -RELEASE branches. (-STABLE isn't distributed in binary form and they don't worry about security at all for -CURRENT.) Therefore, you may first have to do a minor version update (see next section) or new patches won't even be available for your system.

First, get the patches (maybe unset the GZIP environment variable first to reduce clutter):

  • freebsd-update fetch

It'll download them to a temporary location and tell you what will be changed. If you have the OS source code installed in /usr/src, source patches will be included in the update as well.

Now, install them:

  • freebsd-update install

Whether a reboot is needed depends on what was updated. You have to decide that yourself. Obviously anything kernel-related should make you want to do a reboot. If you don't do a reboot, but system daemons were updated, you'll need to restart those.
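
For example, if sshd was among the updated binaries but the kernel was untouched, restarting the daemon is enough:

  • service sshd restart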

If you previously recompiled any of your system binaries with custom options, such as sendmail in order to enable SMTP Authentication (see below), and freebsd-update replaced those binaries, then you will have to recompile them! Otherwise, you will suddenly be running the standard version. I use a script I call rebuild_sendmail so that I don't have to look it up every time:

#!/bin/sh
# Rebuild sendmail's private libraries and then sendmail itself from /usr/src
# (picking up the custom build options), reinstall it, and restart it.
cd /usr/src/lib/libsmutil
make cleandir && make obj && make
cd /usr/src/lib/libsm
make cleandir && make obj && make
cd /usr/src/usr.sbin/sendmail
make cleandir && make obj && make && make install
cd /etc/mail
make restart

This assumes the source code is being kept up-to-date; for that, the Components line in /etc/freebsd-update.conf must include src, as in the excerpt below.
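
Here's the relevant excerpt:

# /etc/freebsd-update.conf
Components src world kernel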

Upgrade to a new minor version of the OS

Reference: FreeBSD Update section of the FreeBSD Handbook

The following info is based on my upgrade from 8.1-RELEASE to 8.3-RELEASE, and from 8.3-RELEASE to 8.4-RELEASE (assumes generic kernel):

Prepare the environment

I normally have "-v" in my GZIP environment variable, and this really clutters the output of freebsd-update, so unset it:

  • unsetenv GZIP

Get new files

  • freebsd-update -r 8.3-RELEASE upgrade

Takes several hours.

Merge files

Most merges will happen automatically, but some un-mergeable files like /etc/passwd will be reported, and you need to answer 'y' and merge them manually...but you don't get a nice merge interface, you just get dumped into an empty text editor! What you are expected to do here is create a merged file. Be very careful!

The goal is to compare and merge each old file from the directory tree rooted at /var/db/freebsd-update/merge/old (copied from the live system) with the corresponding new file in /var/db/freebsd-update/merge/XXX, where XXX is the new FreeBSD version you're upgrading to (e.g. 8.4-RELEASE). You need to put each merged file into the same relative location under /var/db/freebsd-update/merge/new, which is where the empty text editor will be saving to.
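
For each file you're prompted about, a manual merge in a second terminal looks roughly like this (using /etc/hosts as a hypothetical example, for an upgrade to 8.4-RELEASE):

cd /var/db/freebsd-update/merge
# write the merged result to the path under new/, which is exactly
# where the empty editor would have saved it
sdiff -d -w 100 -o new/etc/hosts old/etc/hosts 8.4-RELEASE/etc/hosts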

In my upgrade to 8.3-RELEASE, I just elected to go into the editor (you have no choice, really), loaded the old file, and saved it as-is. I didn't bother merging in the new one! Not ideal, but the least amount of hassle, right?

In my upgrade to 8.4-RELEASE, I tried a new approach: merge the files in a separate window, pre-populating the new folder, so that when the editor is opened, it's not empty, but rather has the merged file in it. Then I can just give it a once-over and save the result.

To accomplish this, in a separate terminal, as root, it would be nice to be able to run mergemaster. So I tried to do it like this:

  • mergemaster -w 100 -ciFv -m /var/db/freebsd-update/merge/8.4-RELEASE -D /var/db/freebsd-update/merge/new

However, it didn't work. I have asked about it on the freebsd-questions mailing list. Here is another, cruder method I tried, which did work:

  • cd /var/db/freebsd-update/merge/8.4-RELEASE
  • find -X . -type f | xargs -n 1 -o -I % sh -c '{ echo Now processing %. left=current, right=new, help="?"; sdiff -d -w 100 -o ../new/% ../old/% %; }'

The downside of this method is that it assumes you want to do an interactive merge (sdiff) of every file, whereas sometimes you are really going to want to save time and just choose to use the old or new file without merging; mergemaster would give you that ability.

Regardless of how you do your merge, once you've saved all the files in the editor, you'll be prompted to approve a diff for each one. If you answer "n" to any of these prompts, it will abort the entire upgrade and you will have to start over! So hopefully the merges are all OK, and you can continue.

However, among the changes you're asked to approve may be unspecified differences in /etc/pwd.db and /etc/spwd.db, the binary files that contain your password database. You have no choice but to answer "y", but for God's sake, rebuild those files before rebooting! (see below).

Review changes

freebsd-update now presents you with lists of all the files that will be deleted, all the files that will be added, and all the files that will be modified.

Pay special attention to the changes in /etc.

After showing you the lists, freebsd-update exits. The changes are staged and ready to be made, but nothing has actually been installed yet.

Install the new files

You are about to overwrite your real system files. I suggest making a backup of /etc first:

  • cp -pr /etc /tmp/etc.backup

Cross your fingers:

  • freebsd-update install

Rebuild soon-to-be-clobbered databases

Now, unless you got mergemaster to work, you probably have to do the things that mergemaster normally would do for you.

It seems things don't get replaced until after reboot. This may be a real problem!

If /etc/passwd or /etc/master.passwd were changed, or if /etc/pwd.db or (most importantly, I think) /etc/spwd.db changed (e.g., as in 8.4-RELEASE, got reset to new defaults), then a pwd_mkdb run will be necessary to regenerate the .db files. You want to do this before you shut down, or you'll never get to log back in.

Normally you would do this:

  • pwd_mkdb -p /etc/master.passwd

This will use /etc/master.passwd as the source file, and the -p means generate a new /etc/passwd from it, in addition to the .db files.

However, the files in /etc are, at this stage, untouched. The new versions are sitting gzipped in /var/db/freebsd-update/files, a huge dumping ground with no sub-structure. An index to the files is in /var/db/freebsd-update/install.XXXXX/INDEX-NEW, where XXXXX is a random ID; look at the directory creation date to figure out which one is current, if there's more than one.
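
If there's more than one install.XXXXX directory, sorting by modification time makes the current one easy to spot:

  • ls -ltd /var/db/freebsd-update/install.*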

So I think what you need to do is something like this, to inspect the new files:

  • cd /var/db/freebsd-update
  • mkdir -m 0700 /tmp/oldpwdfiles
  • zcat files/`grep '^/etc/master\.passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/master.passwd
  • zcat files/`grep '^/etc/passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/passwd
  • zcat files/`grep '^/etc/pwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/pwd.db
  • zcat files/`grep '^/etc/spwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/spwd.db
  • ls -l /tmp/oldpwdfiles
total 10
 6 -rw-r--r--  1 root  wheel   4.0k Jun 25 00:48 master.passwd
 4 -rw-r--r--  1 root  wheel   3.2k Jun 25 00:49 passwd
 0 -rw-r--r--  1 root  wheel     0B Jun 25 00:49 pwd.db
 0 -rw-r--r--  1 root  wheel     0B Jun 25 00:49 spwd.db

Obviously pwd.db and spwd.db are crap and we'd be in trouble if we installed those empty files!

If /tmp/oldpwdfiles/master.passwd looks OK, then try generating a new passwd file and pair of .db files:

  • mkdir -m 0700 /tmp/newpwdfiles
  • pwd_mkdb -d /tmp/newpwdfiles -p /tmp/oldpwdfiles/master.passwd
  • ls -l /tmp/newpwdfiles
total 138
  6 -rw-------  1 root  wheel   4.0k Jun 25 00:48 master.passwd
  4 -rw-r--r--  1 root  wheel   3.2k Jun 25 00:53 passwd
 68 -rw-r--r--  1 root  wheel    68k Jun 25 00:53 pwd.db
 60 -rw-------  1 root  wheel    60k Jun 25 00:53 spwd.db

Quite a bit better. As you can see, master.passwd was just moved over, and the other three files were generated. Now to replace them:

  • gzip /tmp/newpwdfiles/*
  • mv /tmp/newpwdfiles/master.passwd.gz files/`grep '^/etc/master\.passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz
  • mv /tmp/newpwdfiles/passwd.gz files/`grep '^/etc/passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz
  • mv /tmp/newpwdfiles/pwd.db.gz files/`grep '^/etc/pwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz
  • mv /tmp/newpwdfiles/spwd.db.gz files/`grep '^/etc/spwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz

And finally, clean up:

  • rm -fr /tmp/oldpwdfiles /tmp/newpwdfiles

You'll have to go through a similar process if you use sendmail and you merged in any changes to /etc/mail/aliases or /etc/mail/*.cf files. Ordinarily, the most thorough way is this:

  • cd /etc/mail; make all
  • make install
  • make restart

But as before, the files haven't been installed yet!

Likewise, changes to /etc/login.conf require rebuilding a database:

  • cap_mkdb /etc/login.conf (see the man page for details)

Same for /etc/services:

  • services_mkdb -q (see the man page for details; it rebuilds /var/db/services.db from /etc/services)

There's a bug filed about this, but only for the master.passwd; it doesn't take into account this latest development where .db files are clobbered: http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/165954

Reboot and continue

OK, now reboot to try out the new kernel:

  • shutdown -r now (again, this assumes you want the generic kernel)

Hope & pray it comes back up. If it does, do this again to get world installed:

  • freebsd-update install

This worked for me, for the upgrade to 8.3-RELEASE.

For the 8.4 upgrade, after this stage, it said:

Completing this upgrade requires removing old shared object files.
Please rebuild all installed 3rd party software (e.g., programs
installed from the ports tree) and then run "/usr/sbin/freebsd-update install"
again to finish installing updates.

Worry about that in a minute. First, realize that at this point, /etc has been modified, so it's a good idea to make sure you like the look of the new files, especially these:

  • /etc/master.passwd
  • /etc/group
  • /etc/mail/* (if changed, you need to run the appropriate make command in /etc/mail ...perhaps make all install restart)
  • /etc/services (if changed, you need to run services_mkdb -q to rebuild /var/db/services.db)
  • /etc/login.conf (if changed, freebsd-update should've run cap_mkdb to rebuild login.conf.db)

If anything's amiss, remember you made a backup in /tmp/etc.backup.

OK, now you can follow the directions below to update your ports tree and rebuild everything(!). Personally I don't like doing this because things tend to go wrong if you don't do it piecemeal. The downside is that some things will be left un-updated. But you can deal with that; read on...

Check for cruft

After the upgrade, you might want to see if anything out-of-date got left behind:

  • cd /usr/src && make check-old

If there's anything, you can run make delete-old to get rid of it; it will ask you about each file, normally. Ref: http://www.freebsd.org/doc/handbook/make-delete-old.html
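
There are also separate targets for stale shared libraries. Only delete those after your 3rd-party software has been rebuilt, since old binaries may still be linked against them:

  • cd /usr/src && make check-old-libs
  • make delete-old-libs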

There are a couple of options for checking the installed shared libraries:

  • If you install the sysutils/bsdadminscripts port, you can run pkg_libchk to check for missing libraries. It even tells you which ports are affected.
  • If you install the sysutils/libchk port (note: requires Ruby), you can run libchk to check for missing libraries, check for unused libraries, and see exactly which binaries use each library. To figure out which port installed the file needing the library, you need to run pkg which /path/to/the/file (or pkg_info -W with the old pkg_install tools).

Sample output of pkg_libchk:

gamin-0.1.10_4: /usr/local/libexec/gam_server misses libpcre.so.0
gio-fam-backend-2.28.8_1: /usr/local/lib/gio/modules/libgiofam.so misses libpcre.so.1

Rebuilding these two ports should be sufficient to get them linked to the current libpcre library. (Double-checking /usr/local/lib shows that there's a libpcre.so.3 now).

Why did I have these ports installed? pkg info -r gamin-0.1.10_4 tells me gamin is required by gio-fam-backend, and pkg info -r gio-fam-backend-2.28.8_1 reveals that gio-fam-backend isn't required by anything that I currently have installed. This is a weird port, though, and it is not something you want to deinstall. It is FreeBSD-specific, and is kind of a companion to the glib port. (Though apparently they decommissioned it - see the 20130731 entry in UPDATING). pkg info -r glib-2.34.3 reveals what's using glib: ImageMagick & MediaWiki.

Anyway, portmaster --update-if-newer gio-fam gamin takes care of the problem. Now when I run pkg_libchk gamin-0.1.10_5 and pkg_libchk gio-fam-backend-2.34.3 (the new versions), there are no problems. The question now is whether I need to update ImageMagick. The lack of problems reported by pkg_libchk ImageMagick-nox11-6.8.0.7_1 suggests the answer is no.

Reboot to restart daemons

After upgrading from 8.3-RELEASE to 8.4-RELEASE, /var/log/messages started accumulating error messages from sshd, every time someone tried to log in:

error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key

Indeed, that key file didn't exist until after another reboot, which didn't happen until a mysterious, probably unrelated crash a month after the upgrade.

Web searches suggest that most people running into this problem aren't able to log in at all until they run a special ssh-keygen command to create the missing files, but I was having no such trouble.

I think that for me, the only problem was that after finishing the OS upgrade, sshd needed to actually be restarted. This makes me think that maybe it's a good idea to restart all the daemons as the penultimate step in upgrading the OS. To do that, you could run service -R, but it might be easier to just reboot.

Ports installation & upgrade

Here's some general info about this topic.

Get a quick list of installed ports

  • with pkgng, pkg info -aoq | sort
  • without pkgng, pkg_info -aoq | sort

Portmaster flags

Some of the most useful flags for portmaster:

  • -d will make it delete old distfiles after each port is installed, rather than asking you about it. (-D would make it keep them.)
  • -b will make it keep the backup it made when installing the previous version of a port. It usually deletes the backup after successfully installing a new version.
  • -x pattern will make it exclude ports (including dependencies) that match the glob pattern.
  • --update-if-newer will prevent rebuilding/reinstalling ports that don't need it. But for some reason, you have to specify more than one port on the command-line for this to work.
  • -n depends on what else you are doing. Usually it means do a dry run. But in conjunction with -e pkgdbfolder, -s, --clean-distfiles, --clean-packages, --check-depends, or --check-port-dbdir, it means "answer no to all questions."
  • --packages will make it use packages (major timesaver) for the port and its dependencies, as long as the latest package isn't older than the version in the ports collection. Otherwise, it falls back on building the port.
  • --packages-build will make it try to use packages for build dependencies... I haven't figured this one out yet. It seems to not be necessary?

Here's an example (to update Perl modules, and Perl if needed):

  • portmaster -d --update-if-newer --packages p5-

Environment prep

If you have set your BZIP2 environment variable to include -v, like I have, and you have portaudit installed, then you will probably find that every time you do anything with ports or packages, you get a bunch of useless lines that say /var/db/portaudit/auditfile.tbz: done, and FreeBSD's /usr/ports/Mk/bsd.port.mk misinterprets this as problems needing to be fixed.

  • unsetenv BZIP2

I reported this bug to the freebsd-ports mailing list, but I doubt it will get fixed unless I submit a patch, myself.

(Looks like this was eventually fixed as part of pkgng integration.)

Update portmaster

Probably a good idea before doing anything else with portmaster.

  • portmaster --packages portmaster

Since --update-if-newer needs multiple packages to be specified, we can't use it here. Thus, if there's nothing to update, you will end up reinstalling the same version you already had.

Check integrity of existing ports

  • portmaster --check-depends

Delete cached options from previous builds of stale ports

This just does some cleanup of /var/db/ports, which is where the options you chose in the 'make config' step of port building are stored. The options for ports that are currently properly installed will be left alone.

  • portmaster --check-port-dbdir

Update ports collection

The ports collection is a folder tree containing Makefiles and patches for 3rd-party software. Anytime you want to add or update 3rd-party software, first make sure the ports collection is up-to-date.

First time using portsnap or just want a fresh tree? Download the current ports tree to a temporary location (fetch), then install it in /usr/ports, replacing whatever was there before (extract):

  • portsnap fetch extract

Not the first time? Download updates to a temporary location (fetch), then apply them to the existing ports tree (update), deleting any modified or added files:

  • portsnap fetch update

Now go look at /usr/ports/UPDATING.

See what packages need updating

  • pkg audit will tell you which installed packages have security vulnerabilities.
  • pkg version -v -l "<" will tell you what installed packages could be upgraded from the packages collection. It's fast.
  • pkg version -P -v -l "<" will tell you what installed packages could be upgraded from the ports collection. It's slow.

The upgrade info is based on the info in /usr/ports, not by seeing what's new online.

Some ports will just have a portrevision bump due to changes in the port's Makefile. These are usually unimportant and not worth the pain of rebuilding and reinstalling.

Dealing with port upgrade problems

A port has moved

The Handbook doesn't cover this, but sometimes the ports collection folder for a port that you've installed will get moved.

These moves are listed in /usr/ports/MOVED, which is read by portmaster. So, although you could look at that file beforehand, you probably won't find out about a move until you run portmaster --check-depends, or when you try to update your installed port.

For example, there was once a www/mediawiki meta-port, which pointed to the actual port for the latest stable version. I had used it to install mediawiki119. When I went to update it with portmaster www/mediawiki, I got the following error:

        ===>>> The www/mediawiki port moved to www/mediawiki119
        ===>>> Reason: Rename mediawiki to mediawiki119

The first place to look when you see this message is /usr/ports/UPDATING. Often, there will be a note about it there, with instructions. In this case, though, there wasn't, so I asked about it on freebsd-ports and also on freebsd-doc. I was told that UPDATING will only have unusual things in it, and this particular situation didn't qualify, because the version hadn't actually changed.

I don't think there's a way to just update the list of installed packages so that it will know about the move. You have to want to update the port, and then use portmaster's -o flag to say which new port you want to replace the old one with.

So, for an ordinary move, the answer is:

  • portmaster -o NEWPORT INSTALLEDPORT

For example, I could have updated without changing the version:

  • portmaster -o www/mediawiki119 www/mediawiki

But since there was a newer version available, I decided to update to it:

  • portmaster -o www/mediawiki120 www/mediawiki

lzma library errors

This probably won't come up again, but maybe it will help someone else. After updating to 8.4-RELEASE, I was trying to rebuild the PHP port (as part of the MediaWiki upgrade), but it failed early in the process with this message:

  • checking whether libxml build works... no
    configure: error: build test failed. Please check the config.log for details.
    ===> Script "configure" failed unexpectedly.
    Please report the problem to ale@FreeBSD.org [maintainer] and attach the
    "/usr/ports/lang/php5/work/php-5.4.16/config.log" including the output of the
    failure of your make command. Also, it might be a good idea to provide an
    overview of all packages installed on your system (e.g. a /usr/sbin/pkg_info
    -Ea).
    *** Error code 1
    Stop in /usr/ports/lang/php5.

Looking at that config.log file, I saw more detail:

  • configure:21972: checking whether libxml build works
    configure:21999: cc -o conftest -O2 -pipe -march=pentium3 -fno-strict-aliasing -fvisibility=hidden -R/usr/local/lib -L/usr/local/lib conftest.c -lm -lxml2 -lz -liconv -lm >&5
    /usr/local/lib/libxml2.so: undefined reference to `lzma_code@XZ_5.0'
    /usr/local/lib/libxml2.so: undefined reference to `lzma_properties_decode@XZ_5.0'
    /usr/local/lib/libxml2.so: undefined reference to `lzma_end@XZ_5.0'
    /usr/local/lib/libxml2.so: undefined reference to `lzma_auto_decoder@XZ_5.0'
    configure:21999: $? = 1
    configure: program exited with status 1

On a hunch, I decided to see what would happen if I tried to restart Apache:

  • httpd: Syntax error on line 108 of /usr/local/etc/apache22/httpd.conf: Cannot load /usr/local/libexec/apache22/libphp5.so into server: /usr/local/lib/liblzma.so.5: version XZ_5.0 required by /usr/local/lib/libxml2.so.5 not defined

When Googling for answers, I found some mention that ports needing the lzma port now need to use the xz port. Something doesn't sound right about that, though, because the xz port is deprecated as well.

It turns out that at some point, the xz port had been installed, needed by some other port. This resulted in some "lzma" libs being placed in /usr/local/lib a very long time ago. Better lzma libs later became part of the base system in /usr/lib. Since the old libs were still sitting in /usr/local/lib, they were taking precedence when other ports needed them. This eventually prevented the PHP port from building, due to its reliance on libxml2, which in turn relies on liblzma, which needs to be up-to-date.

Simply moving the outdated libs out of /usr/local/lib took care of the problem. Specifically, it was /usr/local/lib/liblzma.*. Really, though, the solution is to pkg_delete xz-5.03 (or whatever version you have).
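
To check for this kind of shadowing yourself, compare the base system's copies against whatever is sitting in /usr/local/lib:

  • ls -l /usr/lib/liblzma.* /usr/local/lib/liblzma.*

If both exist, the /usr/local/lib copies are the ones other ports will tend to pick up.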

more lzma library errors

While attempting to upgrade all of my installed ports on another occasion in late 2013, the graphics/gd port failed to build because libtool was looking for the nonexistent /usr/local/lib/liblzma.la. A 2011 discussion about it suggested the fix might be as easy as deleting and reinstalling ImageMagick:

  • cd /usr/ports/graphics/ImageMagick-nox11
  • make deinstall clean install

This just led to the same kind of failure when the build tried to link in ImageMagick's tiff coder. So I tried rebuilding the underlying lib first:

  • cd /usr/ports/graphics/tiff
  • make deinstall clean install

Then I went back to the ImageMagick build:

  • cd /usr/ports/graphics/ImageMagick-nox11
  • make

That got me past the tiff coder error, so I continued:

  • make install

That worked as well.

ImageMagick's enormous set of dependencies and lengthy build process have been problematic for me in the past. I'd rather exclude it from any port upgrades, but I'm not sure it's possible or wise to do so.

First installation of specific ports

MySQL

  • Install the databases/mysql##-server port. This will also install the client; no need to install the client port separately.
  • Close off access to the server from outside of localhost by making sure this is in /var/db/mysql/my.cnf:
[mysqld]
bind-address=127.0.0.1

Also, if you have enabled an ipfw firewall, you can put similar ipfw rules somewhere like /etc/rc.local. For example (replace X with your IP address in all 3 places, and make sure to actually run these commands if you're not gonna reboot):

# only allow local access to MySQL
ipfw add 3000 allow tcp from X to X 3306
ipfw add 3001 deny tcp from any to X 3306
  • Make sure mysql_enable="YES" is in /etc/rc.conf, then run /usr/local/etc/rc.d/mysql-server start. MySQL is now running but is insecure; you need to set the root password and delete the anonymous accounts as described in the manual at http://dev.mysql.com/doc/refman/5.1/en/default-privileges.html ... however, if you're also restoring data from backups, you can skip this step since your backups hopefully include the 'mysql' database which has all the user account data in it!

You need to set a root password for MySQL. This is one way (where PWD is the password you want to use):

  • mysqladmin -u root password PWD

Or, if you already have a backup of the 'mysql' database, such as one made by my backup script (see below), you can just load that backup, because the usernames and passwords are stored in there.

To restore from backups:

  1. Unzip the latest backup file in /usr/backup/mysql/daily (see the MySQL backup script below for what puts the backup files there).
  2. Run mysql < backupfile.sql to load the data, including user tables & passwords.
  3. Run mysql_upgrade to verify that the data is all OK to use with this version of MySQL.
  4. mysql -u root should now give an error for lack of password. Time to install MediaWiki?
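
Concretely, the restore looks something like this (the backup filename is hypothetical):

cd /usr/backup/mysql/daily
bunzip2 mysql-backup-20130625.sql.bz2
mysql -u root < mysql-backup-20130625.sql
mysql_upgrade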

MediaWiki

The www/mediawiki port installs MediaWiki, which is small, but its dependencies result in the installation of many other ports, including ImageMagick (assuming you are supporting image uploads), Ghostscript, libxslt, docbook-xsl, and Python.

For ImageMagick, since I'm only using it in MediaWiki, I disable its X11 and Perl support, and formats I don't care about like FPX, JBIG, JPEG2000.

sa-utils

sa-utils is an undocumented port that installs the script /usr/local/etc/periodic/daily/sa-utils. The purpose of the script is to run sa-update and restart spamd every day, so you don't have to do it from a cron job. You get the output, if any, in your "daily" report by email.

  • Install the mail/sa-utils port. When prompted, enable sa-compile support.
  • Put whatever flags sa-update needs in /etc/periodic.conf. For me, it's:
    daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org" and, after I've confirmed it's working OK, daily_sa_quiet="yes".
  • Assuming you enabled sa-compile support, uncomment this line in /usr/local/etc/mail/spamassassin/v320.pre:
    loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody

That's it.

Now, if you don't want to install sa-utils, but you are running SpamAssassin, you'll want a cron job that updates SpamAssassin rules and restarts spamd every day. Here's the basic version I used to use for the core rules:

  • /usr/local/bin/sa-update --nogpg --channel updates.spamassassin.org && /usr/local/etc/rc.d/sa-spamd restart

After using that for years, I switched to a version that incorporates SpamAssassin developer Justin Mason's "sought.cf" ruleset. First, outside of crontab, add the channels' GPG keys to sa-update's keyring:
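
For the sought ruleset, that's the same import shown in the SpamAssassin update section later on this page (the stock updates.spamassassin.org key should already be on sa-update's keyring):

  • fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY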

The caveat here is that the keys will eventually expire. For example, the one for sought.rules.yerp.org expires on 2017-08-09. At that point, you'll have to notice that the updates stopped working, and get a new key. To see the keys on sa-update's keyring, you can do this:

  • gpg --homedir /usr/local/etc/mail/spamassassin/sa-update-keys --list-key

So here's what goes in the crontab:

  • env PATH=/usr/bin:/bin:/usr/local/bin /usr/local/bin/sa-update -v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org && /usr/local/etc/rc.d/sa-spamd restart

The reason I override the cron environment's default path of /usr/bin:/bin is that sa-update needs to run the GPG tools in /usr/local/bin.

However, like I said, instead of a cron job, I'm using sa-utils now.

tt-rss

The www/tt-rss port is Tiny Tiny RSS, a web-based feed reader I'm now using instead of Google Reader.

  • portmaster www/tt-rss
  • mysql -pYYYYY
    • create database ttrss;
    • connect ttrss;
    • source /usr/local/www/tt-rss/schema/ttrss_schema_mysql.sql;
    • quit;
  • edit /usr/local/www/tt-rss/config.php:
    • DB_USER needs to be root (I didn't bother creating a special user...)
    • DB_NAME needs to be ttrss
    • DB_PASS needs to be whatever's appropriate for DB_USER
    • DB_PORT needs to be 3306
    • SELF_URL_PATH needs to be whatever is appropriate
    • FEED_CRYPT_KEY needs to be 24 random characters
    • REG_NOTIFY_ADDRESS needs to be a real email address
    • SMTP_FROM_ADDRESS needs to at least have your real domain
  • cp /usr/local/share/tt-rss/httpd-tt-rss.conf /usr/local/etc/apache22/Includes/
  • /usr/local/etc/rc.d/apache22 reload
  • visit http://yourdomain/tt-rss/
Startup failed
Tiny Tiny RSS was unable to start properly. This usually means a misconfiguration
or an incomplete upgrade. Please fix errors indicated by the following messages:

	FEED_CRYPT_KEY requires mcrypt functions which are not found.

The solution, after making sure mcrypt isn't mentioned in /usr/ports/www/tt-rss/Makefile:

  • portmaster security/php5-mcrypt
  • /usr/local/etc/rc.d/apache22 restart
  • visit http://yourdomain/tt-rss/ and you should get a login screen. u: admin, p: password.
  • Actions > Preferences > Users. Select checkbox next to admin, choose Edit. Enter new password in authentication box.

The password is accepted, but subsequent accesses to all but the main Preferences page result in "{"error":{"code":6}}". There's nothing in the ttrss_error_log table in the database. Apache error log shows a few weird things, but nothing directly related:

File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php
File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php

Logging in again seems to take care of it, unless I change the password again. This only affects the admin user.

Create a new user, and login as that user. Subscribe to some feeds. Feeds won't update at all unless you double-click on their names, one by one.

Now the update daemon:

  • In /etc/rc.conf, add ttrssd_enable="YES"
  • /usr/local/etc/rc.d/ttrssd start

Feeds should now update automatically, as per the interval defined in Actions > Preferences > Default feed update interval. Minimum value for this, though, is 15 minutes. This can also be overridden on a per-feed basis.

Themes are installed by putting uniquely named .css files (and any supporting files & folders) in tt-rss's themes/ directory. I decided to try clean-greader for a Google Reader-like experience. It works great, but I'm not happy with some of it, especially its thumbnail-izing of the first image in the feed content, so I use the Actions > Preferences > Customize button and paste in this CSS:

/* use a wider view for 1680px width screens, rather than 1200px (see also 1180px setting below) */
#main { max-width: 1620px; }

/* preferences help text should be formatted like tt-rss.css says, and make it smaller & italic */
div.prefHelp {
    color : #555;
    padding : 5px;
    font-size: 80%;
    font-style: italic;
}

/* tidy up feed title bar, especially to handle feed icons, which come in wacky sizes */
img.tinyFeedIcon { height: 16px; }
div.cdmFeedTitle {
  background-color: #eee;
  padding-left: 2px;
  height: 16px;
}
a.catchup {
  padding-left: 1em;
  color: #cdd;
  font-size: 75%;
  font-style: italic;
}

/* Narrower left margin (44px instead of 71px), greater width (see also #main above) */
.claro .cdm.active .cdmContent .cdmContentInner,
.claro .cdm.expanded .cdmContent .cdmContentInner {
  padding: 0 8px 0 50px;
  max-width: 1180px;
}

/* main feed image is often real content, e.g. on photo blogs, so don't shrink it */
  .claro .cdm.active .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
  .claro .cdm.active .cdmContent .cdmContentInner p:first-of-type img,
  .claro .cdm.active .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child,
  .claro .cdm.expanded .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
  .claro .cdm.expanded .cdmContent .cdmContentInner p:first-of-type img,
  .claro .cdm.expanded .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child {
  float: none;
  margin: 0 0 16px 0 !important;
  max-height: none;
  max-width: 100%;
}

/* scroll bars are too hard to see by default */
::-webkit-scrollbar-track {
  background-color: #ccc;
}
::-webkit-scrollbar-thumb {
  background-color: #ddd;
}

py-fail2ban

After installing the port, create /usr/local/etc/fail2ban/action.d/bsd-route.conf with the following contents:

# Fail2Ban configuration file
#
# Author: Michael Gebetsroither, amended by Mike J. Brown
#
# This is for blocking whole hosts through blackhole routes.
#
# PRO:
#   - Works on all kernel versions and has no compatibility problems (back to debian lenny and WAY further).
#   - It's FAST for very large numbers of blocked ips.
#   - It's FAST because it Blocks traffic before it enters common iptables chains used for filtering.
#   - It's per host, ideal as action against ssh password bruteforcing to block further attack attempts.
#   - No additional software required beside iproute/iproute2
#
# CON:
#   - Blocking is per IP and NOT per service, but ideal as action against ssh password bruteforcing hosts

[Definition]

# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
#
actionstart =


# Option:  actionstop
# Notes.:  command executed once at the end of Fail2Ban
# Values:  CMD
#
actionstop =


# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck =


# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionban   = route -q add <ip> 127.0.0.1 <routeflags>


# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionunban = route -q delete <ip> 127.0.0.1

[Init]

# Option:  routeflags
# Note:    Space-separated list of flags, which can be -blackhole or -reject
# Values:  STRING
routeflags = -blackhole

Also create /usr/local/etc/fail2ban/jail.local. In it, you can override examples in jail.conf, and add your own:

[apache-badbots]
enabled = true
filter = apache-noscript
action = bsd-route
         sendmail-buffered[name=apache-badbots, lines=5, dest=root@yourdomain]
logpath = /var/log/www/*/*error_log

[apache-noscript]
enabled = true
filter = apache-noscript
action = bsd-route
         sendmail-whois[name=apache-noscript, dest=root@yourdomain]
logpath = /var/log/www/*/*error_log

[sshd]
enabled = true
filter = bsd-sshd
action = bsd-route
         sendmail-whois[name=sshd, dest=root@yourdomain]
logpath = /var/log/auth.log
maxretry = 6

[sendmail]
enabled = true
filter = bsd-sendmail
action = bsd-route
         sendmail-whois[name=sendmail, dest=root@yourdomain]
logpath = /var/log/maillog

Be sure to replace yourdomain. Check for errors with the command fail2ban-client -d | grep '^ERROR' || echo no errors.

In /etc/rc.conf, add the line fail2ban_enable="YES" and then run /usr/local/etc/rc.d/fail2ban start

Disable any cron jobs that were doing work that you expect fail2ban to now be doing.

Check your log rotation scripts to make sure they create new, empty files as soon as they rotate the old logs out. Apache HTTPD, for example, won't create a new log until there's something to put in it, and if fail2ban notices the logfile is missing for too long, it will disable the jail.
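
If you rotate with newsyslog, the C flag handles this by creating the new log file immediately at rotation time; a hypothetical entry for a per-site Apache error log:

# /etc/newsyslog.conf (path is an example): C = create the log if it doesn't exist
/var/log/www/example.org/httpd-error.log   644  7  *  @T00  JC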

Because you're going to get mail from fail2ban@yourdomain, set up an alias for this account so that any bounces (e.g. due to network problems) will go to the alias.

Rootkit Hunter

Install the program and set up its database:

  • portmaster security/rkhunter – this will install wget as well.
  • rehash
  • rkhunter --propupd
  • rkhunter --update

Run the program once to see if it finds anything:

  • rkhunter --check

As per the Rootkit Hunter FAQ, assuming nothing looks genuinely wrong but you got warnings about commands being replaced by scripts, generate a list of SCRIPTWHITELIST entries to manually add to the appropriate section of /usr/local/etc/rkhunter.conf:

  • awk -F"'" '/replaced by a script/ {print "SCRIPTWHITELIST="$2}' /var/log/rkhunter.log

There are more examples at the bottom of the FAQ.

Beware: if you are running rkhunter from an interactive shell and have aliased 'ls' and/or have it configured for color, the unexpected output may not be parsed properly during the 'filesystem' tests, and you will get bogus warnings about hidden directories:

[04:04:54] Warning: Hidden directory found: ?[1m?[38;5;6m/usr/.?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/usr/.^[[39;49m^[[m' (No such file or directory)
[04:04:54] Warning: Hidden directory found: ?[1m?[38;5;6m/usr/..?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/usr/..^[[39;49m^[[m' (No such file or directory)
[04:04:55] Warning: Hidden directory found: ?[1m?[38;5;6m/etc/.?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/etc/.^[[39;49m^[[m' (No such file or directory)
[04:04:55] Warning: Hidden directory found: ?[1m?[38;5;6m/etc/..?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/etc/..^[[39;49m^[[m' (No such file or directory)

If it happens to you, do whatever is needed to get 'ls' to behave normally, or add filesystem to the DISABLE_TESTS line in /usr/local/etc/rkhunter.conf.

The port adds a script to /usr/local/etc/periodic/security. You can enable it by adding to /etc/periodic.conf:

daily_rkhunter_update_enable="YES"
daily_rkhunter_update_flags="--update --nocolors"
daily_rkhunter_check_enable="YES"
daily_rkhunter_check_flags="--cronjob --rwo"

Alternatively, you can just add this to root's crontab:

# run Rootkit Hunter every day at 1:06am
06 01 * * * /usr/local/bin/rkhunter --cronjob --update --rwo

Upgrading specific ports

Certain installed ports (3rd-party software packages) require extra attention when you want to update them with portmaster. Because of this, you can't just update all of your third-party software in one swoop; it's best to do certain ones separately. Here are some notes for the more difficult ones I ran across.

Upgrade Perl and Perl modules

Instructions for major and minor version updates are separate entries in /usr/ports/UPDATING. One thing they didn't make at all clear is that (prior to 2013-06-12), perl-after-upgrade is supposed to be run after updating modules; it won't find anything to do otherwise. So, to go from 5.12 to 5.16, I did this:

  1. portmaster -o lang/perl5.16 lang/perl5.12
  2. portmaster p5-
  3. perl-after-upgrade -f
  4. Inspect the old version's folders under /usr/local/lib/perl5 and /usr/local/lib/perl5/site_perl. Anything left behind, aside from empty folders, probably means some modules need to be manually reinstalled.

When there's a perl patchlevel update (e.g. 5.16.2 to 5.16.3), UPDATING might tell you to upgrade everything Perl-related via portmaster -r perl. I'm not a big fan of this. Somehow, pretty-much everything on the system is tied to Perl, including Apache, MediaWiki, you name it. I don't understand why.

It is possible to upgrade just Perl itself, and the modules:

  1. portmaster perl
  2. portmaster p5-
  3. perl-after-upgrade -f

(Update: perl-after-upgrade doesn't exist anymore. Starting with Perl 5.12.5 / 5.14.3 / 5.16.3, they dropped the patchlevel from the folder names in /usr/local/lib/perl5 and /usr/local/lib/perl5/site_perl, and the installer handles the move automatically.)

Update SpamAssassin and related

The SpamAssassin port is now mail/spamassassin, not mail/p5-Mail-SpamAssassin. See UPDATING.

For the options I've chosen, this will update various Perl modules, gettext, libiconv, curl, libssh2, ca_root_nss, gnupg1.

  • portmaster --packages mail/p5-Mail-SpamAssassin (old port name)
  • portmaster --packages mail/spamassassin (new port name)

The port is rather clumsy in that it deletes /usr/local/etc/mail/spamassassin/sa-update-keys, so after the update, I have to re-import the GPG key for the "sought" ruleset.

  • fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY

I asked about this on the mailing list, and cc'd the port maintainer, but no word yet.

If everything has installed correctly, restart sa-spamd when it's done. It probably stopped running during the install.

As of 3.4.0, if your system doesn't support IPv6, spamc will complain that it can't connect to spamd on ::1. To work around this, you need to add the new -4 flag (to force/prefer IPv4) in two places:

  • /usr/local/etc/mail/spamassassin/spamc.conf
  • spamd_flags in /etc/rc.conf

Update MySQL

Oracle is now calling it MySQL Community Server.

Don't update more than one minor version at a time (e.g., the docs say go from 5.5 to 5.6 before going to 5.7).

The actual databases shouldn't be affected by a minor version bump of MySQL. But of course, you should still consider making a fresh backup first:

  • mysqldump -E -uXXXXX -pYYYYY --all-databases | bzip2 -c -q > /tmp/mysql-backup.sql.bz2

Here's what I did when going from 5.5 to 5.6. I'm not sure it was really necessary to stop the 5.5 server and delete the 5.5 packages, but it seemed like a good idea in case there would be conflicts.

  • service mysql-server stop
  • pkg delete -f mysql\* (if you don't do the -f, it will also try to remove packages that depend on MySQL, like mediawiki)
  • portmaster -d databases/mysql-server56 (the client's dependencies now include Python and libxml, so it takes a while)
  • service mysql-server start
  • mysql_upgrade -uXXXXX -pYYYYY
  • service mysql-server restart

You should make sure MediaWiki and any other MySQL-dependent apps still work after doing this.

MySQL backup script

This simple script I wrote keeps a week's worth of daily backups of the database. I run it every day via cron.

MYSQLUSER and MYSQLPASSWD must be set to real values, not XXXXX and YYYYY; and DUMPDIR and ARCHIVEDIR must point to writable directories.

If there's a more secure way of handling this, let me know!

#!/bin/sh

DUMPDIR=/usr/backup/mysql/daily
ARCHIVEDIR=/usr/backup/mysql/weekly
MYSQLUSER=root
MYSQLPASSWD="put_your_password_here"
# Monday=1, Sunday=7
ARCHIVEDAY=7

DATE=`/bin/date "+%Y%m%d"`
BZIP=/usr/bin/bzip2
DUMPER=/usr/local/bin/mysqldump
DAYOFWEEK=`/bin/date "+%u"`
CHECKER=/usr/local/bin/mysqlcheck

# Create an empty file named '.offline' in the document root folder of each
# website that needs to not be accessing the database during the backup.
# This assumes the web server config or index scripts in those folders will
# temporarily deny access as appropriate.
touch /usr/local/www/mediawiki/.offline
touch /usr/local/www/tt-rss/.offline

set +C  # make sure redirected output can overwrite (clobber) existing dump files
if [ -d ${DUMPDIR} -a -w ${DUMPDIR} -a -x ${DUMPER} -a -x ${BZIP} ] ; then
  OUTFILE=${DUMPDIR}/mysql-backup-${DATE}.sql.bz2
  echo "Backing up MySQL databases to ${OUTFILE}..."
  # -E added 2013-04-17 to get rid of warning about events table not being dumped
  ${DUMPER} -E -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --add-drop-database | ${BZIP} -c -q > ${OUTFILE}
else
  echo "There was a problem with ${DUMPDIR} or ${DUMPER} or ${BZIP}; check existence and permissions."
  exit 1
fi

if [ -d ${ARCHIVEDIR} ] ; then
  if [ ${DAYOFWEEK} -eq ${ARCHIVEDAY} ] ; then
    echo "It's archive day. Archiving ${OUTFILE}..."
    /bin/cp -p ${OUTFILE} ${ARCHIVEDIR}
    echo "Deleting daily backups older than 1 week..."
    /usr/bin/find ${DUMPDIR} -mtime +7 -exec rm -v {} \;
  fi
else
  echo "Today would have been archive day, but ${ARCHIVEDIR} does not exist."
  exit 1
fi

if [ -x ${CHECKER} ] ; then
  echo "Checking & repairing tables..."
  ${CHECKER} -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --medium-check --auto-repair --silent
  echo "Optimizing tables..."
  ${CHECKER} -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --optimize --silent
  echo "Done."
fi

# Remove the '.offline' files
rm -f /usr/local/www/mediawiki/.offline
rm -f /usr/local/www/tt-rss/.offline

One downside of this script is that even on my small database, it takes a little while to run, like 15 minutes or so. While it's running, the database tables are locked (read-only). You don't want your database-backed websites to be doing stuff until the dump is finished. So I temporarily take those sites offline by doing a touch .offline to create an empty file named ".offline" in each of the sites' root folders, and then when the backup is done, there's a rm .offline for each one. In those site folders is a "site temporarily offline for backups" HTML page and a .htaccess with the following:

ErrorDocument 503 /.site_offline.html
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/\.offline -f
RewriteCond %{REQUEST_URI} !/\.site_offline\.html
RewriteRule .* - [R=503,L]

Really there's no reason to write the temporary .offline file in the server root; you could put it in /tmp or wherever, and make the first RewriteCond look for it there. You could also hard-code the path in that RewriteCond directive; %{DOCUMENT_ROOT} may not point where you want if you're using Alias directives.

Upgrade Apache from 2.2 to 2.4

In mid-2014, Apache 2.4 became the default version in ports, and the db4 ports were deprecated. The only thing I had that was using db4 was apr (base libs needed by Apache), and it wasn't really using it, so I went ahead and just deleted the installed db4 versions, and added USE_BDB_VER=5 to my /etc/make.conf (apr can't use db6 yet).

Then I upgraded Apache to 2.4. It does require some Apache downtime and uninstalling 2.2 (!) because the 2.4 port will abort installation when it sees that some 2.2 files are in the way.

  1. remove any forcing of apache22 from /etc/make.conf
  2. build apr + apache24 from ports
  3. stop and delete apache22
  4. install apache24
  5. edit .conf files in /usr/local/etc/apache24 (see notes below)
  6. upgrade lang/php5
  7. install www/mod_php5 with same options as lang/php5 (yes, they split the Apache module into a separate port again!)
  8. 'service apache24 start' and cross your fingers
  9. in /etc/rc.conf, s/apache22_enable/apache24_enable/

Config file editing...

Every time you edit, use 'apachectl configtest' to check for problems. Some things to watch for:

  • Many modules are not enabled by default, but you probably want to enable a bunch of them, like these: include_module, deflate_module, actions_module, rewrite_module, ssl_module and socache_shmcb_module, cgi_module, userdir_module, php5_module, any proxy modules you need.
  • For the most part, you can copy-paste everything from the apache22 files, but don't include any allow/deny directives. Use the new format as explained at https://httpd.apache.org/docs/trunk/upgrading.html (see the example after this list).
  • Remove "NameVirtualHost" lines; they do nothing (since 2.3.11) and are going away.

Update MediaWiki

General info: MediaWiki Manual: Upgrading

This is updating the Mediawiki code (PHP, etc.), not the database.

You probably want to make a backup first. I already have daily MySQL backups, so I just do this:

  • cp -pR /usr/local/www/mediawiki /tmp/mediawiki_backup

The new installation actually shouldn't clobber your old LocalSettings or anything else; the backup is just in case. However, any extensions probably need to be reinstalled because they're often tied to a specific version of MediaWiki.

This updates php (+related), imagemagick (+related), freetype (+related)

  • portmaster -P www/mediawiki

Assuming the above went well:

  • make sure there's nothing special in /usr/local/www/mediawiki/UPGRADE
  • cd /usr/local/www/mediawiki/maintenance/
  • php update.php

Manually install appropriate versions of all of the extensions mentioned in LocalSettings.php. Assuming there are no changes required in LocalSettings.php, this just involves unzipping them into the extensions directory. The site where you get the extensions has installation instructions.

Blank pages after upgrading PCRE

In February 2014, after PCRE was upgraded to 8.34 or higher, MediaWiki versions prior to 1.22.1 started serving up articles with empty content. This is due to a change in PCRE 8.34 that necessitates a patch to MediaWiki and a cache purge.

Symptoms:

  • empty content when viewing pages, but edit boxes have the content
  • HTTP error log shows these messages:
    PHP Warning: preg_match_all(): Compilation failed: group name must start with a non-digit at offset 4 in /usr/local/www/mediawiki/includes/MagicWord.php on line 876
    PHP Warning: Invalid argument supplied for foreach() in /usr/local/www/mediawiki/includes/MagicWord.php on line 877

For reference:

  • Here's the Mediawiki bug report
  • Here's the patch (sorta) - I had to just copy-paste the $it and $group lines into /usr/local/www/mediawiki/includes/MagicWord.php around line 706 (exact spot varies), replacing the old $group line.

The fix takes effect immediately, but it doesn't affect cached pages, which will probably be any pages that were visited by anyone during the time the problem was happening. If you know what all these pages are, you can purge their cached copies one by one if you visit each one while logged in and load the page with ?action=purge appended to the URL. Obviously, this is not convenient if most of your wiki is affected.

Instead, I did a mass purge by using the PurgeCache extension. This required creating the /usr/local/www/mediawiki/extensions/PurgeCache folder and installing 4 files into it. Then I had to go to my user rights page at Special:UserRights/myusername and add myself to the developer group (which is deprecated, incidentally; another alternative would be to change the extension's code to require the sysop group instead). Finally, I visited Special:PurgeCache and clicked the button to finish the cache purge.

Update tt-rss

Via web interface

Updating tt-rss can be done from within the web interface, when logged in as Admin. Of course, this will mean the port is out of date, but I wanted to try it to see if it works. It does, but in the future I think I'll just use the port to update it.

First, make a backup:

  • cp -pR /usr/local/www/tt-rss /usr/local/www/tt-rss.`date -j "+%Y%m%d"`

Now give tt-rss write permission:

  • chgrp www /usr/local/www
  • chmod g+w /usr/local/www /usr/local/www/tt-rss

It will make its own backup. The update will be a fresh installation in the tt-rss directory. When the update is done, copy your themes and any other customized files over from the backup. I'd undo the permission change as well:

  • chmod g-w /usr/local/www /usr/local/www/tt-rss*

This might be a good time to check to see if your themes also need to be updated.

Follow the instructions below to merge config.php changes and update the database.

Via ports

You can use portmaster on it like normal. However, it will probably cause some PHP and its modules to update, and it will overwrite the old tt-rss installation. It does leave your config.php alone, but it's up to you to merge in any changes from config.php-dist.

To do an interactive merge:

  • mv config.php config.php.old
  • sdiff -d -w 100 -o config.php config.php-dist config.php.old

Now edit config.php, and set SINGLE_USER_MODE to true. Visit the site and see if you're prompted to do a database upgrade. If so, click through.

If everything is working, restart the feed update daemon:

  • /usr/local/etc/rc.d/ttrssd restart

Edit config.php to set SINGLE_USER_MODE back to false, and test again.

Fresh install via ports

My PHP upgrade (see below) obliterated my old tt-rss installation, but thankfully left the old config file and themes behind. Here's what I did:

  • portmaster www/tt-rss - installs php56-pcntl, php56-curl, php56-xmlrpc, php56-posix - Now you have a not-quite-up-to-date snapshot...good enough for now, but you have to use git to stay current. :/
  • copy config.php from old installation BUT SET SINGLE_USER_MODE or you'll get an access level error on login
  • install latest clean-greader theme
  • visit the installation in a web browser - "FEED_CRYPT_KEY requires mcrypt functions which are not found."
  • portmaster security/php56-mcrypt
  • service apache24 restart
  • visit the installation in a web browser - follow prompt to perform updates
  • unset SINGLE_USER_MODE
  • visit again and make sure it works
  • service ttrssd restart

Update PHP

This was how I did the PHP upgrade from 5.4 to 5.6 (roughly):

  • pkg delete '*php5*' - this deletes mediawiki and tt-rss too
  • cd /usr/ports/www/mediawiki && make config - I disabled ImageMagick
  • for php56 config: xcache is the only speedup option that works with 5.6 (no pecl or whatever the other one is). I enabled it
  • portmaster www/mediawiki
  • follow instructions to copy xcache.ini to where it goes. I set an admin username and pw hash in it.
  • portmaster www/mod_php56
  • portmaster security/php56-hash - needed for mediawiki logins to work, but wasn't installed for some reason
  • cd /usr/local/www/mediawiki/maintenance
  • php update.php - didn't work at first because it wasn't using AdminSettings.php. Solution: add require_once("AdminSettings.php"); to LocalSettings.php
  • service apache24 restart
  • see above for tt-rss

Upgrade to pkgng

In November 2013, I decided to upgrade from the stock pkg_install tools to the new pkgng, aka pkg. I followed the instructions in the announcement and all went well, except I had to write to the author of that announcement to learn that he meant to write enabled: yes instead of enabled: "yes". If you include the quotes, the pkg command will warn about the value not being a boolean.

pkgng replaces the pkg_install tools, including pkg_create, pkg_add, and pkg_info. It doesn't remove them from your system; you just have to remember not to use them. Putting WITH_PKGNG=yes in your /etc/make.conf tells portmaster and other tools to use the new tool, pkg, which has a number of subcommands, e.g. pkg info.
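
For example, a few everyday pkg subcommands (these are all in the stock pkg tool):

pkg info                         # list all installed packages
pkg info -x php                  # list packages matching a pattern
pkg which /usr/local/bin/curl    # which package installed this file?
pkg audit -F                     # fetch vulnerability data and audit installed packages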

Incompatibility with portmaster

I was hoping to also use packages when I upgrade my ports, but as of mid-December 2013, running portmaster with the -P or --packages option results in a warning: Package installation support cannot be used with pkgng yet, it will be disabled.

HTTPS support

Apache comes with HTTPS support (SSL) disabled by default. It's not too hard to enable, but configuration does require some effort, especially for a public server with name-based virtual hosts (i.e., serving different websites with different configurations as directed by the HTTP "Host:" header in incoming requests).

Upgrade OpenSSL

FreeBSD comes with libssl (OpenSSL) 0.9.x, which only supports TLS 1.0. You can get decent protection with that, but it's better to use OpenSSL 1.x and get TLS 1.1 and 1.2 support, which makes it a lot easier to have "perfect" forward secrecy. All you have to do is install the security/openssl port, and then anything you compile that needs OpenSSL will use the updated libs.

It's safe to build things like Apache, curl, and Spamassassin using the stock libssl and then rebuild them later after you upgrade libssl.

Get a certificate

To support HTTPS, your server needs an SSL certificate (cert). For a public server you don't want to use a self-signed cert; nobody will install it into their browser/OS's certificate store, and even if they do, their browser may still warn about how crappy the security is—the cipher may be strong, but no one can vouch for the cert's authenticity and trust. It's hard to explain, but it's kind of like how in journalism, a news outlet is unreliable if they don't publish corrections. A self-signed cert can't be revoked, for example if the server's private key is disclosed, but a "real" cert signed by a Certificate Authority (CA) can be.

To get a certificate, generally speaking, you have to:

  1. generate a private key (basically a random number + optional passphrase to encrypt it)
  2. use the private key to generate a Certificate Signing Request (CSR)
  3. submit the CSR to a Certificate Authority (CA).

Usually you have to pay the CA some money, and they have to do some kind of verification that you are a valid point of contact for the domain. The simplest, "basic" or "Class 1" type of verification is they send a code to (e.g.) hostmaster@example.org (example.org actually being whatever domain you're seeking a cert for), and if you paste the code into a form on their website, they know you saw the email and they'll issue you a cert.

Of course if you are trying to do this on the cheap, you want a free cert, and doing a web search for free SSL certificate will get you lots of results, but mostly they will be services offering free certificates for S/MIME. These are specialized certificates for signing or encrypting email messages before they are sent. S/MIME certs can't be used for web servers, or for encrypting an SMTP server's traffic.

Some CAs allow you to have them generate the private key and CSR for you. I don't recommend doing that, because it's better to know that only you have your private key and that the key and the CSR were generated on computers you control. So just generate your own key and CSR, and copy-paste that into the CA's web form.

Think about the security of your private key

If anyone ever gets a copy of your private key and they know (or can easily guess) the passphrase you used to encrypt it, then your key and all certs associated with it should be considered compromised. So, think about where you are storing the private key. How secure is that computer it's on? Is the passphrase written down somewhere? Is it easy to guess if someone has access to your other files? Hopefully it's not stored in plain text on the same box!

If your key is ever compromised, you have to revoke the certificates that were signed with it. Your CA should have a process for doing that and they shouldn't charge extra for it.

Generate a private key

  • openssl genrsa -out server.key 2048

Some considerations:

  • Use a passphrase? No. This would make it more secure, but then you'd have to enter it every time Apache is started or sent a SIGHUP.
  • How many bits? Some tutorials say 1024, but 2048 is pretty standard now, so use 2048. More bits means more CPU cycles needed for encryption, so I'm hesitant to use 4096 (my server is running on old hardware), lest it slow things down too much. However, I've read that encryption overhead really isn't that high, even on busy servers, so maybe it's no big deal to use 4096.
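
If you do want a passphrase-protected copy of the key at rest (e.g. for backups) while keeping the unencrypted one for Apache, openssl can add or strip the passphrase. A sketch (file names are just examples):

  • openssl rsa -aes256 -in server.key -out server.key.enc
  • openssl rsa -in server.key.enc -out server.key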

Generate a CSR

  • openssl req -new -key server.key -out server.csr -sha256

Use -sha256 here: SHA-1 signatures are considered broken now and CAs are phasing them out; see https://community.qualys.com/blogs/securitylabs/2014/09/09/sha1-deprecation-what-you-need-to-know

You'll be prompted to enter country, state/province, locality, organization name, organizational unit name—these can be blank or filled in as you wish (although I found that I had to enter country/state/locality). Then you enter the Common Name (CN), which should be the "main" domain name the cert is for. If it's a wildcard cert, the CN would be something like "*.example.com". Otherwise it needs to match the main domain name that people will be using to access the server. Some CAs might want you to use a FQDN ("something.example.com").

You'll also be prompted to enter an email address that will be in the cert; I suggest something that works but isn't too revealing, like root or hostmaster at your domain.

If prompted for a challenge password, this is a password that you create and give to the CA. They can then use it in order to verify you in future interactions with them. It's a way to protect against someone impersonating you when they talk to the issuer.

Optional company name is probably for if your company is requesting the cert on behalf of someone else. I just leave it blank.

Now you have a text file, server.csr, the contents of which you'll copy-paste or otherwise upload to the CA.
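
Before submitting it, you can double-check what the CSR actually contains:

  • openssl req -noout -text -in server.csr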

Get your cert from the CA

Turn off any ad or script blockers when accessing the CA's website.

If you're new there, you'll probably have to verify an email address (doesn't matter what it is, as long as you can get the code they send you) and paste a validation code into a form. They may also try to make your browser accept an SSL cert for authenticating you. Think of it as an extra-special cookie.

Once you're in, follow whatever procedures they have laid out. Probably they will want to validate your domain. This requires them sending a validation code to an email address at the domain in question (e.g. hostmaster@yourdomain.com), and then you tell them what code you received. After the domain is validated, you give them your CSR text.

I found that when working with one particular CA to get a non-wildcard cert, if I generated a CSR for a bare domain (example.org), the CA required that I enter a FQDN (something.example.org). The resulting cert contained something.example.org as the CN and example.org as a Subject Alternative Name (meaning, "also good for this domain"). It worked fine.

If everything goes well, the CA will give you your requested cert (e.g. ssl.crt), along with root and intermediate certs (maybe in one file). You will need to tell Apache where all of these files are. The CA probably has instructions on their site.
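
It's also worth verifying that the issued cert actually matches your private key; the modulus reported for each should be identical:

  • openssl x509 -noout -modulus -in ssl.crt | openssl md5
  • openssl rsa -noout -modulus -in server.key | openssl md5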

Configure Apache HTTPD

Put the cert files wherever you want, just make sure that the folder and files are readable only by root.

Edit httpd.conf and uncomment the line that says something like

Include etc/apache24/extra/httpd-ssl.conf

Edit extra/httpd-ssl.conf and comment out the <VirtualHost _default_:443>...</VirtualHost> section and its contents (aside from what's already commented out). Here's the general idea of what you should add instead:

Enable name-based virtual host configs (only needed on Apache 2.2; in 2.4 the NameVirtualHost directive is deprecated and ignored):

NameVirtualHost *:443

Set up an alias for a desired access-log format. I want to use the standard "combined" format, with a couple of SSL-specific details appended:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x" combined_plus_ssl

For each of the domains named in the certificate, you need a virtual host entry. You are mainly duplicating your httpd-vhosts.conf entries, but for port 443, with SSL stuff added, and (probably) different log file locations and formats.

In HTTPS, the client first opens a connection to port 443 at the server's IP address and negotiates encryption (the TLS handshake). Once this is done, the actual HTTP request is sent over the encrypted channel, decrypted by the server, and handled.

When using a non-SNI-capable browser, the initial, unencrypted connection does not have a hostname/domain (identifying the desired website) associated with it, so the first <VirtualHost> entry that matches the IP address and port 443 will be handling it, and the certificate defined in that entry must be the same as the one in the entry that will be handling the actual HTTP request. The HTTP-handling entry could be the same entry as the initial connection-handling entry, or it could be separate.

When the connection comes from an SNI-capable browser, then it will probably have a hostname/domain, so an SNI-capable server (like Apache 2.2.12 and up, built with OpenSSL 0.9.8j and up, which is standard since mid-2009) will simply use the <VirtualHost> entry with the corresponding ServerName for both the initial connection and the actual HTTP request.

Once the encrypted connection is established, the rest of the communication is ordinary HTTP requests that arrive encrypted. These are sent to port 443 at the same IP address, and are decrypted and handled like normal (but with these configs, not the ones for port 80). Each request should contain a Host: header to specify the hostname/domain. So the first <VirtualHost> entry does double-duty, handling the HTTP service for one of these domains:

# This one will be for any encrypted requests on *:443 with
# "Host: example.com:443" headers.
#
# By virtue of being first, this entry also applies to the initial connection on
# *:443 (for non-SNI clients), and encrypted requests on *:443 with a missing or
# unrecognized Host header.
#
<VirtualHost *:443>
    ServerName example.com:443
    ServerAdmin root@example.com
    SSLEngine on
    SSLProtocol all -SSLv2 -SSLv3
    SSLCertificateKeyFile "/path/to/server.key"
    SSLCertificateFile "/path/to/ssl.crt"
    SSLCACertificateFile "/path/to/root.crt"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
    DocumentRoot "/path/to/whatever"
    CustomLog "/path/to/whatever" combined_plus_ssl
    ErrorLog "/path/to/whatever"
    LogLevel notice
</VirtualHost>

SSLCACertificateFile is for the CA root cert. Some CAs issue intermediate certs in a file separate from the root cert. In that case, you'd have to refer to that intermediate cert file as SSLCertificateChainFile in your Apache config. But if the root and intermediate cert are in a single file, you just use SSLCACertificateFile by itself.
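
For example, with separate files, the relevant directives would look something like this (paths are placeholders):

SSLCACertificateFile    "/path/to/root.crt"
SSLCertificateChainFile "/path/to/intermediate.crt"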

You're going to want LogLevel to be notice or less verbose, because there's a lot of noise in the SSL info-level messages.

Of course * can be replaced with a specific IP address, if you want.

The rest of the VirtualHost entries are only for the specific Host: headers. Make sure there's one for each name the cert is good for.

# This one will be for any encrypted requests on *:443 with
# "Host: foo.example.com:443" headers, and for the initial
# connection on *:443 by SNI-capable clients wanting foo.example.com.
#
# Don't forget to mirror any non-SSL, non-log changes here
# with the corresponding *:80 entry in httpd-vhosts.conf.
#
<VirtualHost *:443>
    ServerName foo.example.com:443
    ServerAdmin root@example.com
    SSLEngine on
    SSLProtocol all -SSLv2 -SSLv3
    SSLCertificateKeyFile "/path/to/server.key"
    SSLCertificateFile "/path/to/ssl.crt"
    SSLCACertificateFile "/path/to/root.crt"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
    DocumentRoot "/path/to/whatever"
    CustomLog "/path/to/whatever" "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
    ErrorLog "/path/to/whatever"
    LogLevel notice
</VirtualHost>

Ref (non-SNI): https://wiki.apache.org/httpd/NameBasedSSLVHosts
Ref (SNI): https://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI

It's a good idea to have entries for any other domains hosted on the same IPs. That is, every HTTP website should have some kind of HTTPS service as well. This has a couple of ramifications:

  • You will have to keep the :443 <VirtualHost> entries in sync with the :80 ones.
  • When people try to access the HTTPS versions of sites that the certificate isn't valid for, they'll get warnings in their browsers. If they choose to accept the certificate anyway, what do you want to do? In my opinion, the best thing to do is redirect to an HTTPS site that the certificate is good for, or if there's no such option, just redirect to the regular HTTP site. In either case, their initial request should still be handled with SSL:
# People might try to access our hosted domains via HTTPS (port 443)
# even if we don't have certs for those domains. They'll get the default
# cert (as per the first VirtualHost entry) and despite the warning
# in their browser, the user has the option of accepting it.
# We want to redirect them to the appropriate, probably non-SSL location.
#
<VirtualHost *:443>
    ServerName non-ssl-host.example.com:443
    ... the usual SSL stuff goes here ...
    DocumentRoot "whatever"
    Redirect / http://non-ssl-host.yourdomain.org/
    CustomLog "whatever" combined_plus_ssl
    ErrorLog "whatever"
    LogLevel notice
</VirtualHost>

See if it works

  • Visit your web sites with https URLs and see what happens.
  • Use a third-party SSL checker like SSLShopper's SSL Checker.
  • If you use Firefox or Chrome, install the HTTPS Everywhere extension, create a custom ruleset for it, then see if you get redirected to the https URL when you try to visit the http URL of your web site.

Something else to check for is mixed content. Ideally, an HTTPS-served page shouldn't reference any HTTP-served scripts, stylesheets, images, videos, etc.; browsers may warn about it. Replace any http: links in your HTML with relative links (for resources on the same site) or https: links (for resources that are verifiably available via HTTPS). For example, in MediaWiki's LocalSettings.php, I had to change $wgRightsUrl and $wgRightsIcon to use https: URLs. There may still be some external resources which are only available via HTTP, but if they're outside your control, there's nothing you can do about that.
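
For example, the MediaWiki change was just swapping protocols in LocalSettings.php (the URLs here are placeholders):

$wgRightsUrl  = "https://creativecommons.org/licenses/by-sa/4.0/";
$wgRightsIcon = "https://licensebuttons.net/l/by-sa/4.0/88x31.png";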

Improvements

HSTS

HSTS is a lot like HTTPS Everywhere, but it comes standard in modern browsers. You enable HSTS on the server just by having it send a special header in its HTTPS responses. The header tells HSTS-capable browsers to only use HTTPS when accessing the site in the future. In the main configuration, you need

  • LoadModule headers_module modules/mod_headers.so

On my system, this was already enabled. Then, in the <VirtualHost> section for each HTTPS site (not regular HTTP!), you need

  • Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

Test it in your browser by disabling HTTPS Everywhere (if installed), then visit the HTTPS website, then try to visit the HTTP version of the site. The browser should change the URL back to use HTTPS automatically.

POODLE attack mitigation

The attack forces a downgrade to SSLv3, which is now too weak to be relied upon. You have to disable SSLv3. IE6 users will be locked out.

  • SSLProtocol all -SSLv2 -SSLv3

CRIME attack mitigation

This is an easy one. Just ensure TLS compression is not enabled. It normally isn't enabled, but just in case:

  • SSLCompression off

BEAST attack mitigation

  • requires combo of SSLProtocol and SSLCipherSuite
  • use TLS 1.1 or higher, or (for TLS 1.0) only use RC4 cipher
  • you can't specify "RC4 for TLS 1.0, but no RC4 for TLS 1.1+" in mod_ssl
  • TLS 1.1+ can still be downgraded to 1.0 by a MITM!
  • RC4 has vulnerabilities, too!
  • Apache 2.2 w/mod_ssl is normally built w/OpenSSL 0.9.x, supporting TLS 1.0 only!

But wait, read on...

Perfect forward secrecy

Cipher suites using ephemeral Diffie-Hellman key exchange provide forward secrecy, commonly called "perfect" forward secrecy (PFS):

  • it ensures session keys can't be cracked if private key is compromised
  • it requires ephemeral Diffie-Hellman key exchange ("EDH" or "DHE"), optionally with Elliptic Curve cryptography ("ECDHE" or "EECDH") to reduce overhead
  • ECDHE requires Apache 2.3.3+! (it's OK to leave it listed in 2.2's config though)
  • browser support varies

The basic config of

  • SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5

gives me a pretty nice report with lots of green "Forward Secrecy" results on the Qualys SSL Labs analyzer.

This gets more complicated if you want to mitigate the BEAST attack. There are suggestions [2][3] for dealing with it through the use of SSLCipherSuite directives that prioritize RC4 if AES isn't available. However, this is not good for Apache 2.2, because you'll probably end up disabling forward secrecy for everyone.

Reference for SSLCipherSuite: the mod_ssl documentation. It may help to know that on the command line, you can do openssl ciphers -v followed by the same parameters you give in the SSLCipherSuite directive, and it will tell you what ciphers match.
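
For example, to see exactly what the basic config above matches:

  • openssl ciphers -v 'HIGH:MEDIUM:!aNULL:!MD5'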

It's best to beef up your Diffie-Hellman setup by following the instructions at https://weakdh.org/sysadmin.html. In a nutshell:

  • cd /etc/ssl
  • openssl dhparam -out dhparams.pem 2048

After a nice long wait for that to finish, make Apache use the new params and a new order of cipher suites. In /usr/local/etc/apache24/extra/httpd-ssl.conf:

  • SSLOpenSSLConfCmd DHParameters "/etc/ssl/dhparams.pem"
  • SSLHonorCipherOrder on
  • SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

SMTP Authentication and STARTTLS support in Sendmail

FreeBSD comes with sendmail installed in the base system, with support for STARTTLS (the SMTP command that sets up encryption) built in but inactive. You will get encryption support if you just tell sendmail where to find certificates.

To also do authentication—i.e. where authorized users log in to your server to have it deliver mail for them—you need to rebuild sendmail with support for the SASL libraries. Every time there is an update to the base system's sendmail, you'll have to do the rebuild in /usr/src, which can be a pain. Some administrators choose to install sendmail from the ports collection to make this easier, but that port is really mainly intended for helping upgrade sendmail installations on older systems.

Set up authentication

In order to set up authentication, rebuild sendmail with support for the SASL libraries. Just follow the instructions in the SMTP Authentication section of the FreeBSD Handbook.

Where the handbook refers to editing freebsd.mc or the local .mc, I made sure to use /etc/mail/`hostname`.mc.

The handbook also suggests increasing the log level from its default of 9, but doesn't say how. You do it by adding this to the .mc file:

dnl log level
define(`confLOG_LEVEL', `13')dnl

As mentioned previously, any time you update the OS with freebsd-update, you will probably overwrite your custom builds of system binaries. So, for example, if you have built Sendmail with SASL2 support, it will be clobbered by freebsd-update, and you will have to rebuild it!

At this point, do the make install restart as directed, just to make sure nothing broke. sendmail should start up quietly. Maybe send yourself a test message and make sure you can still receive mail OK. Feel free to tail the mail log and see what it says.

The outcome here, if I understand correctly, is this:

  • SMTP clients (email programs) can now ask to interact with my server as a local user (with their login password), in order to use my server as a relay for their outbound mail. (Your ISP may not appreciate this; I know mine insists that people use the ISP's own relays exclusively.)

Previously, to allow relaying, I had set up each user's home IP address as a valid RELAY in /etc/mail/access. Obviously authentication is better. However...

I think the handbook's advice, as given, is rather dangerous, because it says to override the default authentication methods, which the documentation currently says are GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5. The handbook's advice omits KERBEROS_V4, which is no big deal, but then it also adds the LOGIN authentication method, which transmits the username and password in the clear (well, base64-encoded), which is a big deal if the connection isn't yet encrypted.

Regardless of whether you leave LOGIN (or PLAIN) in there, but especially if you do, I strongly suggest you also add this to the .mc file:

dnl SASL options:
dnl f = require forward secrecy
dnl p = require TLS before LOGIN or PLAIN auth permitted
dnl y = forbid anonymous auth mechanisms
define(`confAUTH_OPTIONS',`f,p,y')dnl

While you're in there, throw KERBEROS_V4 back in and change the comments to be more informative:

dnl authentication will be allowed via these mechanisms:
define(`confAUTH_MECHANISMS', `GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl

dnl relaying will be allowed for users who authenticated via these mechanisms:
TRUST_AUTH_MECH(`GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl

Set up encryption

Public key encryption via the STARTTLS command won't work until you tell sendmail where the private key and certificates are. So, in the .mc file add the following:

dnl certificate and private key paths for STARTTLS support
define(`confCACERT_PATH', `/etc/mail/certs')dnl
define(`confCACERT', `/etc/mail/certs/CAcert.pem')dnl
define(`confSERVER_CERT', `/etc/mail/certs/MYcert.pem')dnl
define(`confSERVER_KEY', `/etc/mail/certs/MYkey.pem')dnl
define(`confCLIENT_CERT', `/etc/mail/certs/MYcert.pem')dnl
define(`confCLIENT_KEY', `/etc/mail/certs/MYkey.pem')dnl

Also create the referenced directory and files. They must be readable only by owner, and symlinks are OK:

  • mkdir -m 700 /etc/mail/certs
  • cd /etc/mail/certs
  • ln -s /actual_path_to_CA_cert/ssl.crt MYcert.pem
  • ln -s /actual_path_to_my_private_key/server.key MYkey.pem
  • ln -s /actual_path_to_CA_root_cert/root.crt CAcert.pem

Now make install restart and tail the mail log, watching for errors. Also run the tests at checktls.com ... for me, everything worked on the first try!
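
If you want to test STARTTLS locally instead, openssl has a built-in SMTP client mode that performs the STARTTLS handshake and shows you the certificate chain:

  • openssl s_client -starttls smtp -connect localhost:25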

Outcomes:

  • SMTP clients (email programs and mail relays) that connect to my server anonymously in order to hand off mail for my users (or for other domains I relay to) can now request encryption and communicate securely.
  • My SMTP server, when connecting to a remote SMTP server in order to deliver mail from my users, can now request encryption and communicate securely.

Certificate limitations

I have read that not all certificates work for STARTTLS.

Apparently you can run openssl x509 -noout -purpose -in path_to_your_cert to see what "purposes" your cert is approved for. Here's the output for my AlphaSSL wildcard cert:

Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : No
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No

I suspect "SSL client : Yes" is crucial.

Client certificate verification

What good is encryption if the client is being impersonated by some Man-in-the-Middle (MITM) who is choosing his favorite cipher and sending you his public key? The way to defend against this is to verify the client. But you also have to figure out what to do with unverifiable clients.

Certificates for trusted clients or their CAs are required on the server

Unless you configured the server not to request a certificate from the client, it will ask for one, and it will tell the client "I'm prepared to accept a certificate signed with these CA root certificates..." The certs it will accept are the root certs and self-signed certs that are in the confCACERT file, plus those that you have symlinks for in the confCACERT_PATH directory. The client will then decide whether it wants to offer the server a cert at all.

The Sendmail Installation and Operation Guide says you can't have the server accepting too many root certs, because the TLS handshake may fail. But it doesn't say how many is too many; it just says to only include the CA cert that signed your own certs, plus any others you trust. I take this to mean that I'm not supposed to include the whole Mozilla root cert bundle, i.e. /usr/local/share/certs/ca-root-nss.crt, as installed by the security/ca_root_nss port (which is maybe already on the system, as it is needed by curl, SpamAssassin, gnupg, etc.).

To verify a client cert signed by a CA, you need a copy of the CA root certificate and any intermediate certificates to be on the system. As many certs as you want can be concatenated together in the confCACERT file, or they can be in separate files represented by symlinks, named for the cert's hash, in the confCACERT_PATH directory. If intermediate certificates are present, they can be in separate files, too, or they can have the higher-level certs, on up to the root, concatenated to them in one file; e.g. GoDaddy has a gd_bundle.crt file available for this purpose, with the contents of gd_intermediate.crt followed by the contents of gd-class2-root.crt; the hash will be for the first cert in the bundle (i.e., the lowest-level intermediate cert).

To verify a self-signed client cert, I believe you need a copy of the self-signed cert to be on the system; it is treated like a CA root cert. It can live in the file with the root certs or it can have a symlink in the confCACERT_PATH directory.

Here is how to generate the appropriate symlink (but replace both instances of cert.crt with the path to the appropriate file):

  • ln -s cert.crt `openssl x509 -noout -hash < cert.crt`.0
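
Alternatively, the c_rehash script that ships with OpenSSL (from the port; it may not be in the base system) can generate hash symlinks for every cert in a directory in one shot:

  • c_rehash /etc/mail/certs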

Verification results

When your server receives email via an encrypted connection, you will see something like this in the Received: headers:

  • (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)

Here are the possible client certificate verification codes:

  • verify=OK means that the verification succeeded.
  • verify=NOT means that the server didn't ask for a cert, probably because it was configured not to.
  • verify=NO means that the server asked for a cert, but the client didn't provide one, or it didn't provide the intermediate and root certs along with the client cert. Maybe the client isn't configured to send the whole bundle, or it doesn't have a client cert to provide, or maybe the client didn't like the list of acceptable CA root certs the server offered. This code is not cause for concern unless you were expecting to be able to verify that client because you have the necessary certs installed.
  • verify=FAIL means that the server asked for a cert, and the client provided one that couldn't be verified. Maybe it's expired, or the server doesn't have the necessary root and intermediate certs, or the certs it has don't have signatures that match those presented, or one of the certs presented is listed in the CRL file (if any).
  • Other codes are NONE (no STARTTLS command issued), SOFTWARE (TLS handshake failure), PROTOCOL (SMTP error), and TEMP (temporary, unspecified error).

By default, Sendmail doesn't care what the code is; it'll proceed with the transaction anyway, if possible. Depending on your needs, you can configure Sendmail to react to these codes.
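
For example, the access map supports per-host TLS policy entries (see the cf/README mentioned below for the exact syntax). Something like this, with a hypothetical host name, would reject mail from that client unless its cert verifies and the key is at least 128 bits:

TLS_Clt:relay.example.net    PERM+VERIFY:128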

Even if there is no verification, the transaction is still encrypted; there is just no certainty of the identity of the connecting host.

The biggest caveat

On a public MX host, you're required (by RFC 3207) not to refuse mail delivery just because the connection is unencrypted, so you can't really do much verification of clients.

A client may present you with valid certs, but if you don't have the necessary certs installed to verify them, that's your fault, not the client's. And you can't say that verify=FAIL is reason to refuse delivery, but then accept any other non-verify=OK codes. I mean, what's to stop the client from just trying again and deliberately triggering one of the other codes? e.g. it could not use STARTTLS at all, or not send a cert.

So really there's only a few choices (pick one):

  • Don't attempt verification at all.
  • Attempt verification of a handful of trusted hosts & root CAs, but only for informational purposes.
  • Require encrypted connections, attempt verification of a handful of trusted hosts & root CAs, and disallow relaying for those that don't get verify=OK. This is not an option for public servers.

Sendmail encryption related documentation of note

Official Sendmail docs:

  • /usr/share/sendmail/cf/README - massive doc explaining .mc & .cf files and all the options therein. Current copy online at MIT.
  • /usr/share/sendmail/cf/cf/knecht.mc - Eric Allman's .mc file with many interesting things in it
  • (this is where it ends up on FreeBSD:) /usr/src/contrib/sendmail/doc/op/op.me - troff source for the Sendmail Installation and Operation Guide. On FreeBSD there's a Makefile in that folder, so you can cd /usr/src/contrib/sendmail/doc/op/ && make op.ps op.txt op.pdf to generate PostScript, ASCII (ugly), and PDF copies. A recent but not-quite-current PDF copy is at sendmail.com. No one else seems to have it online, and very few sites refer to it, yet it's indispensable!

FreeBSD-specific:

  • /etc/mail/README - Mainly just explains how to work around an issue with getting it to work with jails.
  • SMTP Authentication - outdated chapter of the FreeBSD Handbook. The instructions for rebuilding Sendmail are good for enabling STARTTLS and AUTH, at least, but these docs need work.

Useful guides:

Cyrus SASL-related:

TLS/SSL and certificates:

Anti-spam

Enable a caching DNS server

FreeBSD 9 and lower come with BIND preconfigured to be a caching DNS server listening on 127.0.0.1, but it is disabled by default. If you enable it, you'll reduce traffic to/from other DNS servers. You can also configure it to bypass your ISP's DNS server, if that's what you normally use, in order to use certain RBL services to combat spam (see next section).

On FreeBSD 10 and up, the DNS server is called Unbound, and by default it is configured as a local caching resolver. See https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-dns.html for how to enable it.

On FreeBSD 9 and lower, with BIND:

  • Add named_enable="YES" to /etc/rc.conf
  • Uncomment the forwarders section of /etc/named/named.conf and put your ISP's nameserver addresses in it.
  • In /etc/resolv.conf, replace your ISP's nameserver addresses with 127.0.0.1 (or—and I haven't tested this—if you use DHCP, add prepend domain-name-servers 127.0.0.1; to the /etc/dhclient.conf section for your network interface; see the dhclient.conf man page).
  • service named onestart

Test it:

  • nslookup freebsd.org

The first line of output should say Server: 127.0.0.1 and the lookup should succeed.

At this point you are just forwarding; anytime you look up a host not yet in the cache, you are asking your ISP's nameserver to request it for you. It might pull it from its own cache.

Support RBLs

You are probably combating spam by using RBLs, which rely on DNS queries to find out if a given IP is a suspected spammer.

Some RBL services block queries from the major ISPs, because they generate too much traffic. URIBL is an example of such a service.

To deal with this, after enabling the caching & forwarding DNS service as described above, you now need to disable forwarding for just the RBL domains. Then your server will query those domains' DNS servers directly. It will work if you just add something like this to named.conf (then restart named):

/* Let RBLs see queries from me, rather than my ISP, by disabling forwarding for them: */

// RBLs that are disabled but mentioned in my sendmail config
zone "blackholes.mail-abuse.org" { type forward; forward first; forwarders {}; };

// RBLs that are enabled in my sendmail config
zone "bl.score.senderscore.com" { type forward; forward first; forwarders {}; };
zone "zen.spamhaus.org" { type forward; forward first; forwarders {}; };

// RBLs that are probably enabled in SpamAssassin
zone "multi.uribl.com" { type forward; forward first; forwarders {}; };
zone "dnsbl.sorbs.net" { type forward; forward first; forwarders {}; };
zone "combined.njabl.org" { type forward; forward first; forwarders {}; };
zone "activationcode.r.mail-abuse.com" { type forward; forward first; forwarders {}; };
zone "nonconfirm.mail-abuse.com" { type forward; forward first; forwarders {}; };
zone "iadb.isipp.com" { type forward; forward first; forwarders {}; };
zone "bl.spamcop.net" { type forward; forward first; forwarders {}; };
zone "fulldom.rfc-ignorant.org" { type forward; forward first; forwarders {}; };
zone "list.dnswl.org" { type forward; forward first; forwarders {}; };

Secondary and tertiary MX records

To have a place for your inbound mail to queue when your host is down, it's common to set up a secondary MX that stores-and-forwards. The downside is that it probably attracts a lot of spam which doesn't get caught because the secondary MX accepts all mail for your domain, and your host, when it comes back online, will accept all mail from that secondary.

One way to partially work around this problem is to make your primary MX host also be a tertiary MX. Some spammers will favor the tertiary, but real mailers will try the secondary first.

If the spammers get wise, you can try using a different hostname for the tertiary MX, so long as its A record points to the same IP.
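
In zone-file terms, the scheme looks something like this (names are examples; mx3 would be an A record for the same IP as mail):

example.org.    IN MX 10 mail.example.org.
example.org.    IN MX 20 backup.example.org.
example.org.    IN MX 30 mx3.example.org.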

Spamassassin

It's tempting to run every piece of incoming mail through Spamassassin, but you don't want to block messages that "look spammy" such as bounces and mailing list traffic (especially the spamassassin users' mailing list). I haven't figured out how to do it right, so I am only running Spamassassin as a user, via procmail, and my .procmailrc does not run administrative messages (including bounces) or mailing list traffic through Spamassassin.
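
A minimal sketch of that kind of .procmailrc (the list pattern and folder names are made up):

# deliver mailing list traffic without scanning it
:0
* ^List-Id:.*users\.spamassassin\.apache\.org
in-sa-users

# run everything else through SpamAssassin
:0fw
| /usr/local/bin/spamc

# file anything it tagged as spam
:0
* ^X-Spam-Status: Yes
probably-spam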

Enable DCC

DCC will score any bulk mail higher. This means legit mailing list posts will also be scored higher, so using it means you have to be vigilant about whitelisting or avoiding scanning mailing list traffic.

To enable DCC checking, just uncomment the appropriate line in /usr/local/etc/mail/spamassassin/v310.pre.

The feature requires allowing UDP traffic in & out on port 6277. See http://www.rhyolite.com/dcc/FAQ.html#firewall-ports2. I didn't need to do anything special to enable this with my particular firewall configuration, but if I did, I would probably put an ipfw allow rule in /etc/rc.local.

Enable SPF...or not

SPF is for catching forged email. See http://www.akadia.com/services/spf.html. The idea is that email from a user at a particular domain will get a "pass" from the SPF checker if the mail comes from an IP address that the domain owner has approved via a special entry in their DNS records. Otherwise it gets a "fail" or "softfail" or whatever.

Getting a "pass" is worthless (Spamassassin score adjustment of zero) because so many spammers use custom domains that they control and set SPF records for. A "fail" is worth about 0.9. It's great for catching a certain kind of spam, as long as the domain owner keeps their SPF records updated and legitimate email from that domain always goes direct from the approved servers to the recipient's servers.

I've read several anti-SPF rants that seem to say there are other reasons SPF is "harmful," but they don't really explain the problems very well, and they don't seem to be based on empirical evidence of "harm."

Honestly, I very rarely get any SPF passes and even fewer fails. It's just wasting time to enable SPF checking in Spamassassin, so after enabling it for a while (in init.pre), I turned it off.

I look at SPF more as just protection for legitimate domains. Non-spam domains with SPF info in their DNS records are far less likely to be forged by spammers. So for my domain, I set up a TXT record that says "v=spf1 a mx -all". Now spammers are less likely to use my domain in the envelope sender address.
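
In the zone file, that's one TXT record on the bare domain (the domain is a placeholder):

example.org.    IN TXT "v=spf1 a mx -all"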

NTP

For things to run smoothly, especially email, you need to keep your system's clock (the one that keeps track of the actual date/time) in sync with the outside world.

stock/classic/reference/Unix ntpd

When setting up FreeBSD via sysinstall, you're asked to pick a server for ntpdate to use.[4] This sets ntpdate_hosts="..." and ntpdate_enable="YES" in /etc/rc.conf, which causes /etc/rc.d/ntpdate to run at boot time to set the clock once, with immediate results. You're expected to make it run daily, if not more often, via a script or cron job.

But wait, ntpdate is deprecated! See its man page. You're now supposed to run ntpd, which adjusts the time gradually, and can connect to remote NTP servers as often as it needs to.

Ideally, you have it running as a daemon, enabled via ntpd_enable="YES" in /etc/rc.conf. You could also or instead do a clock sync on-demand via ntpd -q, same as running ntpdate. Either way, it uses /etc/ntp.conf for its configuration, which mainly just says what servers to check.

See below for a reason you may not want to run the daemon.

If you don't like running the daemon, just set up a root cron job to run /usr/sbin/ntpd -q -x > /dev/null every 4 hours or so.
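
For example, a root crontab entry along these lines:

# sync the clock every 4 hours, slewing (-x) rather than stepping
0 */4 * * * /usr/sbin/ntpd -q -x > /dev/null 2>&1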

Rudimentary instructions for getting ntpd as a daemon are in the FreeBSD Handbook, but they don't cover security issues very well. In particular, you need this in your /etc/ntp.conf:

# 2013-2014: close off hole that lets people use the server to DDoS
#
# 1. disable monitoring
#
disable monitor
#
# 2. before 'server' lines, use the following, as per
#    https://www.team-cymru.org/ReadingRoom/Templates/secure-ntp-template.html
#
# by default act only as a basic NTP client
restrict -4 default nomodify nopeer noquery notrap
restrict -6 default nomodify nopeer noquery notrap
# allow NTP messages from the loopback address, useful for debugging
restrict 127.0.0.1
restrict ::1

The reason you need this is that this particular ntpd implementation listens on UDP port 123 all the time, exposing it to the outside world. It needs to keep that port open in order to work at all. You should try to reduce this exposure via restrict lines in ntp.conf; these can be used to say that only traffic purporting to be from certain hosts (the servers you want to get time info from) will be acknowledged. It wouldn't hurt to duplicate this info in your firewall rules.

But I had bad luck with geographically nearby NTP servers going offline over the years, so I much prefer to use the pool.ntp.org hostnames as the servers to sync to. These pools, by nature, are always changing their IP addresses, so you can't use "restrict" lines or firewall rules to whitelist them, because you don't know what they are. Therefore, it's better to not run the stock ntpd in daemon mode unless you only use static IPs in your ntp.conf server lines.

So instead of running stock ntpd, I run openntpd from the ports collection. It doesn't have this problem.

OpenNTPD

After searching in vain for a way to use the pools securely, I gave up and decided to run openntpd from ports. This is much, much simpler.

  • portmaster net/openntpd
  • In /etc/rc.conf:
ntpd_enable="NO"
openntpd_enable="YES"
openntpd_flags="-s"
  • You can use /usr/local/etc/ntpd.conf as-is; it just says to use a random selection from pool.ntp.org, and to not listen on port 123 (it'll use random, temporary high-numbered ports instead).
  • Logging is same as for the stock ntpd; just put this in /etc/syslog.conf:
ntp.*                                   /var/log/ntpd.log
  • touch /var/log/ntpd.log
  • service syslogd reload
  • Log rotation is probably desirable. Put this in /etc/newsyslog.conf:
/var/log/ntpd.log           644  3     *    @T00    JCN
  • service ntpd stop (obviously not necessary if you weren't running the stock ntpd before)
  • service openntpd start

You can tail the log to see what it's doing. You should see messages about valid and invalid peers, something like this:

ntp engine ready
set local clock to Mon Feb 17 11:44:06 MST 2014 (offset 0.002539s)
peer x.x.x.x now valid
adjusting local clock by -0.046633s

Spamassassin config

See above re: Spamassassin, DCC, and SPF.

Here are some notes about the rest of my Spamassassin config.

v320.pre

There are a bunch of plugins that come with Spamassassin. Many are enabled by default via loadplugin lines in the various *.pre files. I enabled a couple more by uncommenting some more loadplugin lines in /usr/local/etc/mail/spamassassin/v320.pre.

This one is what allows the shortcircuit rules to work:

loadplugin Mail::SpamAssassin::Plugin::Shortcircuit

...You also have to create shortcircuit.cf; see below.

This one is an optimization to compile rules to native code:

loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody

shortcircuit.cf

Some basic rules for the Shortcircuit plugin come with SpamAssassin. These rules can be extended by using the sample Shortcircuiting Ruleset in the SA wiki.

spamc.conf

I feel it's a good idea to avoid scanning extremely large messages. Yes, this gives spammers a back door, but scanning incoming email shouldn't be something that cripples the server. If I had a faster box with more RAM, I would set this limit much higher.

# max message size for scanning = 600k
-s 600000

local.cf

I want suspected spam to be delivered to users as regular messages, not as attachments to a Spamassassin report:

report_safe 0

If a message matches the whitelists, just deliver it without doing a full scan:

shortcircuit USER_IN_WHITELIST       on
shortcircuit USER_IN_DEF_WHITELIST   on
shortcircuit USER_IN_ALL_SPAM_TO     on
shortcircuit SUBJECT_IN_WHITELIST    on

Likewise, if a message matches the blacklists, just call it spam:

shortcircuit USER_IN_BLACKLIST       on
shortcircuit USER_IN_BLACKLIST_TO    on
shortcircuit SUBJECT_IN_BLACKLIST    on

I've never seen BAYES_00 or BAYES_99 mail that was misclassified, so avoid a full scan on that as well:

shortcircuit BAYES_99                spam
shortcircuit BAYES_00                ham

My users get to have their own ~/.spamassassin/user_prefs files:

allow_user_rules 1

My users probably aren't sending out spam to other users on my system:

# probably not spam if it originates here (default score 0)
score NO_RELAYS 0 -5 0 -5

Custom rule: among my users (mainly me), I believe a message with a List-Id header is slightly less likely to be spam:

header  FROM_MAILING_LIST       exists:List-Id
score   FROM_MAILING_LIST       -0.1

Custom rule: a message purporting to be from a mailing list run by my former employer is much less likely to be spam:

header  FOURTHOUGHT_LIST        List-Id =~ /<[^.]+\.[^.]+\.fourthought\.com>/
score   FOURTHOUGHT_LIST        -5.0

Custom rule: a message from an IP resolving to anything.ebay.com can be whitelisted:

# maybe not ideal, but at one point I missed some legit eBay mail
whitelist_from_rcvd *.ebay.com ebay.com

I realize these custom rules could easily let spam through, but I was desperate to avoid false positives, which I was getting when using the AWL (Auto-WhiteList plugin), which despite copious training was making a lot of ham score as spam. AWL is no longer enabled in SpamAssassin by default, and I sure as hell am not using it ever again. So I probably don't need these rules anymore. I leave them in, though, because they remind me how to set up this kind of thing.

Before I enabled a caching, non-forwarding DNS server, the URIBL rules weren't working, so I had to disable the lookups by setting the URIBL scores to zero. Since I set up the non-forwarding DNS server, my URIBL queries are coming from my own IP rather than my ISP's DNS servers, so it works properly. Therefore, I've got this commented out now; it's just here for future reference:

#score URIBL_BLACK 0
#score URIBL_RED 0
#score URIBL_GREY 0
#score URIBL_BLOCKED 0

Bounces generated by my own MTA for mail that originates on my network will get scored lower (i.e., more likely to be ham) due to the NO_RELAYS rule. Without additional configuration, though, any bounces generated by remote MTAs, regardless of whether they're for mail originating on my network or elsewhere, will not be recognized or handled differently than any other inbound mail. Remotely generated bounces for mail originating elsewhere are called backscatter; backscatter is not actually spam, although it often does contain spam or viruses, and is generally unwanted.

In order to distinguish bounces from regular mail, and to distinguish the bounces for mail originating here from backscatter (by default it won't actually score them differently), I need to activate the VBounce plugin. This plugin is already enabled in v320.pre, but it doesn't actually do anything until it is told what the valid relays are for local outbound mail. So here I tell it what to look for in the Received headers to know that a bounce is for mail that originated from my network:

whitelist_bounce_relays chilled.skew.org

Bounces should then hit the ANY_BOUNCE_MESSAGE rule plus one of these:

  • BOUNCE_MESSAGE = MTA bounce message
  • CHALLENGE_RESPONSE = Challenge-Response message for mail you sent
  • CRBOUNCE_MESSAGE = Challenge-Response bounce message
  • VBOUNCE_MESSAGE = Virus-scanner bounce message

You can customize your scoring for these if you want, or in your .procmailrc you can specially handle scanned mail with these tags appearing in the X-Spam-Status header. However, I thought I shouldn't be sending obvious bounces to Spamassassin at all...hmm.
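
If you did want procmail to special-case them, a sketch (the folder name is made up):

# set aside anything SpamAssassin flagged as a bounce
:0
* ^X-Spam-Status:.*ANY_BOUNCE_MESSAGE
bounces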

Personal user_prefs

After saving and separating my ham and spam for a couple of months, then looking at the scores, I'm pretty confident that ham addressed to me is very unlikely to score much higher than 3, so I lowered the spam threshold from 5 to 4:

require_hits 4

Similarly, I'm finding ham addressed to me is very unlikely to be in the BAYES_50_BODY to BAYES_99_BODY range, so I bump those scores up a bit:

# defaults for the following are 0.001, 1.0, 2.0, 3.0, 3.5
score BAYES_50_BODY 2.0
score BAYES_60_BODY 2.5
score BAYES_80_BODY 3.0
score BAYES_95_BODY 4.0
score BAYES_99_BODY 4.5

I thought the default score for a Spamcop hit was pretty low, so I bumped it up:

# default for the following is 1.3, as of January 2014
score RCVD_IN_BL_SPAMCOP_NET 3.0

(I already have my MTA checking Spamcop, but it only looks at the IP connecting to me, so it lets through spam that originated at a Spamcop-flagged IP but that was relayed through a non-flagged intermediary.)

Remember the down-scoring I do for mailing lists in the site config? Well, if that mailing list traffic is addressed to me, I want to score it even lower:

score   FOURTHOUGHT_LIST        -100.0
score   FROM_MAILING_LIST       -1.0

I also have a bunch of whitelist_from entries for my personal contacts.

Finally, I want a Spamassassin report added to the headers of every message I get, so I know why it scored as it did:

add_header  all Report _REPORT_

Git

I already have git installed on a different host, so this is more just my notes on how to use it.

Initial setup

This creates ~/.gitconfig and populates it with reasonable defaults (but set user.name and user.email to real values; I made mine match what I use on GitHub, for consistency):

git config --global user.name "yourname"
git config --global user.email "youremail"
git config --global core.excludesfile ~/.gitignore
git config --global core.autocrlf input
git config --global core.safecrlf true
git config --global push.default simple
git config --global branch.autosetuprebase always
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto

Create a ~/.gitignore and tell it what file globs to ignore (so they won't be treated as part of your project):

# ignore files ending with .old, .orig, or ~
*.old
*.orig
*~

Create a place for your repos:

  • mkdir ~/git_repos

Use a separate SSH keypair for GitHub

You don't have to use your main SSH identity for GitHub.

  • Generate a new keypair: ssh-keygen -t dsa -C "you@yourhost.com"
  • When prompted for a file in which to save the key, make it create a new file: ~/.ssh/id_dsa_github
  • Set a passphrase when prompted.
  • Copy-paste the content of ~/.ssh/id_dsa_github into the SSH keys section of your settings on GitHub.
  • In your ~/.ssh/config, add this:
Host github.com
  IdentityFile ~/.ssh/id_dsa_github
  • See if it works: ssh -T git@github.com

You should get a message that you've successfully authenticated.

Customizations

Here are some of my favorite customizations.

Things in root's crontab

This is not a complete list, of course.

# every 5 minutes, run mrtg to update the network traffic graphs
*/5 * * * * env LANG=C /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg

# on the 8th day of every month, update the GeoIP databases
50 0 8 * *  /usr/local/bin/geoipupdate.sh > /dev/null 2>&1

# every hour, clear out the PHP session cache
10 * * * *  /usr/local/adm/clean_up_php_sessions > /dev/null 2>&1

Things in my crontab

This is not a complete list, either.

# nightly learning of spam misfiled as ham by SpamAssassin (I put it in ~/mail/notham)
35 04 * * * [ -s /home/mike/mail/notham ] && /usr/local/bin/sa-learn --spam --mbox /home/mike/mail/notham > /dev/null 2>&1 && rm /home/mike/mail/notham

/usr/local/adm/clean_up_php_sessions

PHP defaults to storing sessions in /tmp or /var/tmp, and has a 1 in 1000 chance of running a garbage collector upon the creation of a new session. The garbage collector will expire ones that are more than 24 minutes old. You can increase the probability of it running, but still you have to wait for a new session to be created, so it's really only useful for sites which get a new session created every 24 minutes or less. Otherwise, you're better off (IMHO) just running a script to clean out the stale session files. I am using the script below, invoked from root's crontab every 20 minutes:

#!/bin/sh
# PHP reports session.gc_maxlifetime in seconds; convert to minutes for find -cmin
maxmin=$(echo `/usr/local/bin/php -i | grep session.gc_maxlifetime | cut -d " " -f 3` / 60 | bc)
echo "Deleting the following stale sess_* files:"
find /tmp /var/tmp -type f -name sess_\* -cmin +$maxmin
find /tmp /var/tmp -type f -name sess_\* -cmin +$maxmin -delete

Of course you can store session data in a database if you want, and the stale file problem is avoided altogether. But then that's just one more thing that can break.

/etc/periodic.conf

After installing the sa-utils port:

  • daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org"
  • daily_sa_quiet="yes"

To ensure verbose output of the daily run of "pkg audit" (so you can see the vulnerability details):

  • daily_status_security_pkgaudit_quiet="NO"

/etc/ssh/sshd_config

These affect the behavior of the SSH server.

  • Port ##### - Change the listening port from 22 to something else! Eliminates most automated brute-force attacks.
  • GatewayPorts yes - Enable public access to reverse tunnels.
  • ClientAliveInterval 30 - Every 30 seconds, check for client response.
  • ClientAliveCountMax 99999 - Don't disconnect an unresponsive client until 99999 checks fail.

~/.ssh/config

These are settings to use when connecting with the ssh client to remote hosts (replace ###### as appropriate):

CheckHostIP yes
Compression yes
Host my.otherhost.com
  Port #####
Host github.com
  IdentityFile ~/.ssh/id_dsa_github

/etc/sysctl.conf

These are changes to default kernel settings in multi-user mode.

  • net.inet.tcp.keepidle=540000 - Probably no longer necessary if using the sshd_config customizations above, but just in case, every 9 minutes (instead of every 2 hours), send something to every TCP client, so crappy routers between us and them don't think we've disconnected. I used this because I found that some routers had a 10-minute connection timeout, which kept killing my SSH sessions and tunnels.

/etc/make.conf

These are extra environment variables enabled during 'make' runs, and usually are specially checked-for by the Makefiles in the FreeBSD ports.

##
## options for 'make buildworld' and components thereof:
##
# when building top(1), only allocate enough space to handle 75 users, rather than 10000
TOP_TABLE_SIZE=151
# for code with processor-specific optimizations (e.g. base OpenSSL), optimize for my Pentium III CPU (SSE+MMX)
CPUTYPE?=       pentium3
# when building sendmail(1), enable STARTTLS support (requires security/cyrus-sasl2 port and additional configuration)
SENDMAIL_CFLAGS=-I/usr/local/include/sasl -DSASL
SENDMAIL_LDFLAGS=-L/usr/local/lib
SENDMAIL_LDADD=-lsasl2
# I don't remember why the next two lines got commented out!
#SENDMAIL_MC=   /etc/mail/chilled.skew.org.mc
#SENDMAIL_SUBMIT_MC=    /etc/mail/chilled.skew.org.submit.mc

##
## options for building ports:
##
# I am using the new package system (required now)
WITH_PKGNG=yes

# my ancient network card does not support IPv6, so don't bother with IPv6 in networking ports
WITHOUT_IPV6=yes

# networking ports like curl(1) should support HTTPS
WITH_HTTPS=yes

# don't build or install GUIs, including X11 libraries
WITHOUT_GUI=yes
WITHOUT_X11=yes
OPTIONS_UNSET=X11

# don't waste time on tests when building ImageMagick
WITHOUT_IMAGEMAGICK_TESTS=yes

# when building FreeType, enable subpixel rendering capability (disabled by default due to patent issues)
WITH_LCD_FILTERING=yes

# Berkeley DB 5 was the highest version supported by devel/apr1 (Apache dependency) in mid-2014.
# This can be removed if db6 is installed (but the apr1 port will not install it for you).
WITH_BDB_VER=5

# As required by the /usr/ports/UPDATING entry 20141209:
# ensure Linux ports use emulators/linux_base-c6 (CentOS userland), not linux_base-f10 (Fedora 10, unsupported)
OVERRIDE_LINUX_BASE_PORT=c6
OVERRIDE_LINUX_NONBASE_PORTS=c6

/etc/syslog.conf

Anything going to /dev/console should also go to a regular file:

console.*                   /var/log/console.log

If logged in, some users get important messages in their ttys:

!-sm-mta
*.notice                    root,mike
!sm-mta
*.warning                   root,mike
!*

/etc/rc.local

Here is a bare-bones /etc/rc.local which does nothing:

#!/bin/sh
#
# This file is a deprecated but convenient method of launching additional
# "local daemons" (or just running any other startup tasks) at the very
# end of the boot process. See the rc(8) manual page.
#

# load variables from rc.conf (comment out if not needed)
#
#if [ -z "${source_rc_confs_defined}" ]; then
#    if [ -r /etc/defaults/rc.conf ]; then
#        . /etc/defaults/rc.conf
#        source_rc_confs
#    elif [ -r /etc/rc.conf ]; then
#        . /etc/rc.conf
#    fi
#fi

It runs at the end of the boot process to load any custom daemons and to run anything else you want. Its output is prefaced with "Starting additional daemons: " though, so you want to keep its output to a minimal list, and all on one line if possible. For example:

# load additional firewall rules
rules="/etc/ipfw.rules"
[ -f $rules ] && echo -n " $rules" && . $rules

# make encrypted swap file
mkswap="/etc/mkswap.sh"
[ -f $mkswap ] && echo -n " $mkswap" && . $mkswap

~/.cshrc

See my tcsh configuration files document.

~/.login

See my tcsh configuration files document.

nano configuration files

See my nano configuration files document.