User:Mjb/FreeBSD 8 additional software

Quite a bit of this is outdated now because it is all from when I was running FreeBSD 8. However, I keep it around because it is the only place I have documented some of these details.

Set up Git

I already have git installed on a different host, so this is more just my notes on how to use it.

Initial setup

This creates ~/.gitconfig and populates it with reasonable defaults (but set user.name and user.email to real values; I made mine match what I use on GitHub, for consistency):

git config --global user.name "yourname"
git config --global user.email "youremail"
git config --global core.excludesfile ~/.gitignore
git config --global core.autocrlf input
git config --global core.safecrlf true
git config --global push.default simple
git config --global branch.autosetuprebase always
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto

Create a ~/.gitignore and tell it what file globs to ignore (so they won't be treated as part of your project):

# ignore files ending with .old, .orig, or ~
*.old
*.orig
*~

Create a place for your repos:

  • mkdir ~/git_repos

Use a separate SSH keypair for GitHub

You don't have to use your main SSH identity for GitHub.

  • Generate a new keypair: ssh-keygen -t dsa -C "you@yourhost.com"
  • When prompted for a file in which to save the key, make it create a new file: ~/.ssh/id_dsa_github
  • Set a passphrase when prompted.
  • Copy-paste the content of ~/.ssh/id_dsa_github.pub (the public key, not the private key) into the SSH keys section of your settings on GitHub.
  • In your ~/.ssh/config, add this:
Host github.com
  IdentityFile ~/.ssh/id_dsa_github
  • See if it works: ssh -T git@github.com

You should get a message that you've successfully authenticated.
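
Once the key works, cloning a GitHub repo into the repos folder created earlier looks something like this (the repository name is just a hypothetical example):

cd ~/git_repos
git clone git@github.com:yourname/yourproject.git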

Install MySQL

  • Install the databases/mysql##-server port. This will also install the client; no need to install the client port separately.
  • Close off access to the server from outside of localhost by making sure this is in /var/db/mysql/my.cnf:
[mysqld]
bind-address=127.0.0.1
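
To confirm the server really is only listening on the loopback address after a restart, sockstat (part of the FreeBSD base system) should show nothing but 127.0.0.1:3306:

sockstat -4 -l | grep 3306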

Also, if you have enabled an ipfw firewall, you can put similar ipfw rules somewhere like /etc/rc.local. For example (replace X with your IP address in all 3 places, and make sure to actually run these commands if you're not going to reboot):

# only allow local access to MySQL
ipfw add 3000 allow tcp from X to X 3306
ipfw add 3001 deny tcp from any to X 3306
  • Make sure mysql_enable="YES" is in /etc/rc.conf, then run /usr/local/etc/rc.d/mysql-server start. MySQL is now running but is insecure; you need to set the root password and delete the anonymous accounts as described in the manual at http://dev.mysql.com/doc/refman/5.1/en/default-privileges.html. However, if you're also restoring data from backups, you can skip this step, since your backups hopefully include the 'mysql' database, which holds all the user account data!

You need to set a root password for MySQL. This is one way (where PWD is the password you want to use):

  • mysqladmin -u root password PWD

Or, if you already have a backup of the 'mysql' database, such as one made by my script (below), you can just load that backup, because the usernames and passwords are stored in there.

To restore from backups:

  1. Unzip the latest backup file in /usr/backup/mysql/daily (see below for the script that puts the backup files there).
  2. Run mysql < backupfile.sql to load the data, including user tables & passwords.
  3. Run mysql_upgrade to verify that the data is all OK to use with this version of MySQL.
  4. mysql -u root should now give an error for lack of password. Time to install MediaWiki?
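
Putting those steps together as concrete commands (the backup filename is hypothetical; -p will prompt for whichever root password is currently in effect):

cd /usr/backup/mysql/daily
bunzip2 -k mysql-backup-YYYYMMDD.sql.bz2
mysql -u root < mysql-backup-YYYYMMDD.sql
mysql_upgrade -u root -p
# restart mysqld if the restored passwords don't seem to apply yet
service mysql-server restart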

Update MySQL

Oracle is now calling it MySQL Community Server.

Don't update more than one minor version at a time (e.g., the docs say go from 5.5 to 5.6 before going to 5.7).

The actual databases shouldn't be affected by a minor version bump of MySQL. But of course, you should still consider making a fresh backup first:

  • mysqldump -E -uXXXXX -pYYYYY --all-databases | bzip2 -c -q > /tmp/mysql-backup.sql.bz2

Here's what I did when going from 5.5 to 5.6. I'm not sure it was really necessary to stop the 5.5 server and delete the 5.5 packages, but it seemed like a good idea in case there would be conflicts.

  • service mysql-server stop
  • pkg delete -f mysql\* (if you don't do the -f it will also try to remove dependencies like mediawiki)
  • portmaster -d databases/mysql-server56 (the client's dependencies now include Python and libxml, so it takes a while)
  • service mysql-server start
  • mysql_upgrade -uXXXXX -pYYYYY
  • service mysql-server restart

You should make sure MediaWiki and any other MySQL-dependent apps still work after doing this.

MySQL backup script

This simple script I wrote keeps a week's worth of daily backups of the database. I run it every day via cron.

MYSQLUSER and MYSQLPASSWD must be set to real values, not XXXXX and YYYYY; and DUMPDIR and ARCHIVEDIR must point to writable directories.

If there's a more secure way of handling this, let me know!

#!/bin/sh

DUMPDIR=/usr/backup/mysql/daily
ARCHIVEDIR=/usr/backup/mysql/weekly
MYSQLUSER=root
MYSQLPASSWD="put_your_password_here"
# Monday=1, Sunday=7
ARCHIVEDAY=7

DATE=`/bin/date "+%Y%m%d"`
BZIP=/usr/bin/bzip2
DUMPER=/usr/local/bin/mysqldump
DAYOFWEEK=`/bin/date "+%u"`
CHECKER=/usr/local/bin/mysqlcheck

# Create an empty file named '.offline' in the document root folder of each
# website that needs to not be accessing the database during the backup.
# This assumes the web server config or index scripts in those folders will
# temporarily deny access as appropriate.
touch /usr/local/www/mediawiki/.offline
touch /usr/local/www/tt-rss/.offline

set clobber
if [ -d ${DUMPDIR} -a -w ${DUMPDIR} -a -x ${DUMPER} -a -x ${BZIP} ] ; then
  OUTFILE=${DUMPDIR}/mysql-backup-${DATE}.sql.bz2
  echo "Backing up MySQL databases to ${OUTFILE}..."
  # -E added 2013-04-17 to get rid of warning about events table not being dumped
  ${DUMPER} -E -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --add-drop-database | ${BZIP} -c -q > ${OUTFILE}
else
  echo "There was a problem with ${DUMPDIR} or ${DUMPER} or ${BZIP}; check existence and permissions."
  exit 1
fi

if [ -d ${ARCHIVEDIR} ] ; then
  if [ ${DAYOFWEEK} -eq ${ARCHIVEDAY} ] ; then
    echo "It's archive day. Archiving ${OUTFILE}..."
    /bin/cp -p ${OUTFILE} ${ARCHIVEDIR}
    echo "Deleting daily backups older than 1 week..."
    /usr/bin/find ${DUMPDIR} -mtime +7 -exec rm -v {} \;
  fi
else
  echo "Today would have been archive day, but ${ARCHIVEDIR} does not exist."
  exit 1
fi

if [ -x ${CHECKER} ] ; then
  echo "Checking & repairing tables..."
  ${CHECKER} -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --medium-check --auto-repair --silent
  echo "Optimizing tables..."
  ${CHECKER} -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --optimize --silent
  echo "Done."
fi

# Remove the '.offline' files
rm -f /usr/local/www/mediawiki/.offline
rm -f /usr/local/www/tt-rss/.offline

One downside of this script is that even on my small database, it takes a little while to run, like 15 minutes or so. While it's running, the database tables are locked (read-only), and you don't want your database-backed websites to be doing stuff until the dump is finished. So the script temporarily takes those sites offline: it does a touch .offline to create an empty file named ".offline" in each site's root folder before the dump, and an rm .offline for each one when the backup is done. In each of those site folders is a "site temporarily offline for backups" HTML page and a .htaccess with the following:

ErrorDocument 503 /.site_offline.html
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/\.offline -f
RewriteCond %{REQUEST_URI} !/\.site_offline\.html
RewriteRule .* - [R=503,L]

Really there's no reason to write the temporary .offline file in the server root; you could put it in /tmp or wherever, and make the first RewriteCond look for it there. You could also hard-code the path in that RewriteCond directive; %{DOCUMENT_ROOT} may not point where you want if you're using Alias directives.

Install tt-rss

The www/tt-rss port is Tiny Tiny RSS, a web-based feed reader I'm now using instead of Google Reader.

  • portmaster www/tt-rss
  • mysql -pYYYYY
    • create database ttrss;
    • connect ttrss;
    • source /usr/local/www/tt-rss/schema/ttrss_schema_mysql.sql;
    • quit;
  • edit /usr/local/www/tt-rss/config.php:
    • DB_USER needs to be root (I didn't bother creating a special user...)
    • DB_NAME needs to be ttrss
    • DB_PASS needs to be whatever's appropriate for DB_USER
    • DB_PORT needs to be 3306
    • SELF_URL_PATH needs to be whatever is appropriate
    • FEED_CRYPT_KEY needs to be 24 random characters
    • REG_NOTIFY_ADDRESS needs to be a real email address
    • SMTP_FROM_ADDRESS needs to at least have your real domain
  • cp /usr/local/share/tt-rss/httpd-tt-rss.conf /usr/local/etc/apache22/Includes/
  • /usr/local/etc/rc.d/apache22 reload
  • visit http://yourdomain/tt-rss/
Startup failed
Tiny Tiny RSS was unable to start properly. This usually means a misconfiguration
or an incomplete upgrade. Please fix errors indicated by the following messages:

  FEED_CRYPT_KEY requires mcrypt functions which are not found.

The solution, after making sure mcrypt isn't mentioned in /usr/ports/www/tt-rss/Makefile:

  • portmaster security/php5-mcrypt
  • /usr/local/etc/rc.d/apache22 restart
  • visit http://yourdomain/tt-rss/ and you should get a login screen. u: admin, p: password.
  • Actions > Preferences > Users. Select checkbox next to admin, choose Edit. Enter new password in authentication box.

The password is accepted, but subsequent accesses to all but the main Preferences page result in "{"error":{"code":6}}". There's nothing in the ttrss_error_log table in the database. Apache error log shows a few weird things, but nothing directly related:

File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php
File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php

Logging in again seems to take care of it, unless I change the password again. This only affects the admin user.

Create a new user, and login as that user. Subscribe to some feeds. Feeds won't update at all unless you double-click on their names, one by one.

Now the update daemon:

  • In /etc/rc.conf, add ttrssd_enable="YES"
  • /usr/local/etc/rc.d/ttrssd start

Feeds should now update automatically, as per the interval defined in Actions > Preferences > Default feed update interval. Minimum value for this, though, is 15 minutes. This can also be overridden on a per-feed basis.

Themes are installed by putting uniquely named .css files (and any supporting files & folders) in tt-rss's themes/ directory. I decided to try clean-greader for a Google Reader-like experience. It works great, but I'm not happy with some of it, especially its thumbnail-izing of the first image in the feed content, so I use the Actions > Preferences > Customize button and paste in this CSS:

/* use a wider view for 1680px width screens, rather than 1200px (see also 1180px setting below) */
#main { max-width: 1620px; }

/* preferences help text should be formatted like tt-rss.css says, and make it smaller & italic */
div.prefHelp {
    color : #555;
    padding : 5px;
    font-size: 80%;
    font-style: italic;
}

/* tidy up feed title bar, especially to handle feed icons, which come in wacky sizes */
img.tinyFeedIcon { height: 16px; }
div.cdmFeedTitle {
background-color: #eee;
padding-left: 2px;
height: 16px; }
a.catchup {
  padding-left: 1em;
  color: #cdd;
  font-size: 75%;
  font-style: italic;
}

/* Narrower left margin (44px instead of 71px), greater width (see also #main above) */
.claro .cdm.active .cdmContent .cdmContentInner,
.claro .cdm.expanded .cdmContent .cdmContentInner {
  padding: 0 8px 0 50px;
  max-width: 1180px;
}

/* main feed image is often real content, e.g. on photo blogs, so don't shrink it */
  .claro .cdm.active .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
  .claro .cdm.active .cdmContent .cdmContentInner p:first-of-type img,
  .claro .cdm.active .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child,
  .claro .cdm.expanded .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
  .claro .cdm.expanded .cdmContent .cdmContentInner p:first-of-type img,
  .claro .cdm.expanded .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child {
  float: none;
  margin: 0 0 16px 0 !important;
  max-height: none;
  max-width: 100%;
}

/* scroll bars are too hard to see by default */
::-webkit-scrollbar-track {
  background-color: #ccc;
}
::-webkit-scrollbar-thumb {
  background-color: #ddd;
}

Install py-fail2ban

After installing the port, create /usr/local/etc/fail2ban/action.d/bsd-route.conf with the following contents:

# Fail2Ban configuration file
#
# Author: Michael Gebetsroither, amended by Mike J. Brown
#
# This is for blocking whole hosts through blackhole routes.
#
# PRO:
#   - Works on all kernel versions and has no compatibility problems (back to debian lenny and WAY further).
#   - It's FAST for very large numbers of blocked IPs.
#   - It's FAST because it blocks traffic before it enters common iptables chains used for filtering.
#   - It's per host, ideal as action against ssh password bruteforcing to block further attack attempts.
#   - No additional software required beside iproute/iproute2
#
# CON:
#   - Blocking is per IP and NOT per service, but ideal as action against ssh password bruteforcing hosts

[Definition]

# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
#
actionstart =


# Option:  actionstop
# Notes.:  command executed once at the end of Fail2Ban
# Values:  CMD
#
actionstop =


# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck =


# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionban   = route -q add <ip> 127.0.0.1 <routeflags>


# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#
actionunban = route -q delete <ip> 127.0.0.1

[Init]

# Option:  routeflags
# Note:    Space-separated list of flags, which can be -blackhole or -reject
# Values:  STRING
routeflags = -blackhole

Also create /usr/local/etc/fail2ban/jail.local. In it, you can override examples in jail.conf, and add your own:

[apache-badbots]
enabled = true
filter = apache-noscript
action = bsd-route
         sendmail-buffered[name=apache-badbots, lines=5, dest=root@yourdomain]
logpath = /var/log/www/*/*error_log

[apache-noscript]
enabled = true
filter = apache-noscript
action = bsd-route
         sendmail-whois[name=apache-noscript, dest=root@yourdomain]
logpath = /var/log/www/*/*error_log

[sshd]
enabled = true
filter = bsd-sshd
action = bsd-route
         sendmail-whois[name=sshd, dest=root@yourdomain]
logpath = /var/log/auth.log
maxretry = 6

[sendmail]
enabled = true
filter = bsd-sendmail
action = bsd-route
         sendmail-whois[name=sendmail, dest=root@yourdomain]
logpath = /var/log/maillog

Be sure to replace yourdomain. Check for errors with the command fail2ban-client -d | grep '^ERROR' || echo no errors.

In /etc/rc.conf, add the line fail2ban_enable="YES" and then run /usr/local/etc/rc.d/fail2ban start

Disable any cron jobs that were doing work that you expect fail2ban to now be doing.

Check your log rotation scripts to make sure they create new, empty files as soon as they rotate the old logs out. Apache HTTPD, for example, won't create a new log until there's something to put in it, and if fail2ban notices the logfile is missing for too long, it will disable the jail.

Because you're going to get mail from fail2ban@yourdomain, set up an alias for that address so that any bounces (e.g. due to network problems) will be delivered somewhere you'll actually see them.
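
A minimal sketch of that alias, assuming the stock sendmail aliases file:

# in /etc/mail/aliases
fail2ban: root

Run newaliases afterward to rebuild the aliases database.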

Install Rootkit Hunter

Install the program and set up its database:

  • portmaster security/rkhunter – this will install wget as well.
  • rehash
  • rkhunter --propupd
  • rkhunter --update

Run the program once to see if it finds anything:

  • rkhunter --check

As per the Rootkit Hunter FAQ, if nothing looks wrong but you got warnings about script replacement, generate a list of SCRIPTWHITELIST entries to manually add to the appropriate section of /usr/local/etc/rkhunter.conf:

  • awk -F"'" '/replaced by a script/ {print "SCRIPTWHITELIST="$2}' /var/log/rkhunter.log

There are more examples at the bottom of the FAQ.

Beware: if you are running rkhunter from an interactive shell and have aliased 'ls' and/or configured it for color output, the unexpected output may not be parsed properly during the 'filesystem' tests, and you will get bogus warnings about hidden directories:

[04:04:54] Warning: Hidden directory found: ?[1m?[38;5;6m/usr/.?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/usr/.^[[39;49m^[[m' (No such file or directory)
[04:04:54] Warning: Hidden directory found: ?[1m?[38;5;6m/usr/..?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/usr/..^[[39;49m^[[m' (No such file or directory)
[04:04:55] Warning: Hidden directory found: ?[1m?[38;5;6m/etc/.?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/etc/.^[[39;49m^[[m' (No such file or directory)
[04:04:55] Warning: Hidden directory found: ?[1m?[38;5;6m/etc/..?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/etc/..^[[39;49m^[[m' (No such file or directory)

If it happens to you, do whatever is needed to get 'ls' to behave normally, or add filesystem to the DISABLE_TESTS line in /usr/local/etc/rkhunter.conf.

The port adds a script to /usr/local/etc/periodic/security. You can enable it by adding to /etc/periodic.conf:

daily_rkhunter_update_enable="YES"
daily_rkhunter_update_flags="--update --nocolors"
daily_rkhunter_check_enable="YES"
daily_rkhunter_check_flags="--cronjob --rwo"

Alternatively, you can just add this to root's crontab:

# run Rootkit Hunter every day at 1:06am
06 01 * * * /usr/local/bin/rkhunter --cronjob --update --rwo

Upgrade Perl and Perl modules

Instructions for major and minor version updates are separate entries in /usr/ports/UPDATING. One thing they didn't make at all clear is that (prior to 2013-06-12), perl-after-upgrade is supposed to be run after updating modules; it won't find anything to do otherwise. So, to go from 5.12 to 5.16, I did this:

  1. portmaster -o lang/perl5.16 lang/perl5.12
  2. portmaster p5-
  3. perl-after-upgrade -f
  4. Inspect the old version's folders under /usr/local/lib/perl5 and /usr/local/lib/perl5/site_perl. Anything left behind, aside from empty folders, probably means some modules need to be manually reinstalled.
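
A quick way to spot leftovers, assuming the old version was 5.12 (adjust the globs for your versions):

find /usr/local/lib/perl5/5.12* /usr/local/lib/perl5/site_perl/5.12* -type f 2>/dev/null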

When there's a perl patchlevel update (e.g. 5.16.2 to 5.16.3), UPDATING might tell you to upgrade everything Perl-related via portmaster -r perl. I'm not a big fan of this. Somehow, pretty much everything on the system is tied to Perl, including Apache, MediaWiki, you name it. I don't understand why.

It is possible to upgrade just Perl itself, and the modules:

  1. portmaster perl
  2. portmaster p5-
  3. perl-after-upgrade -f

Update: perl-after-upgrade doesn't exist anymore. Starting with Perl 5.12.5 / 5.14.3 / 5.16.3, the patchlevel was dropped from the folder names in /usr/local/lib/perl5 and /usr/local/lib/perl5/site_perl, and the installer handles this automatically.

Update MediaWiki

General info: MediaWiki Manual: Upgrading

This is updating the Mediawiki code (PHP, etc.), not the database.

You probably want to make a backup first. I already have daily MySQL backups, so I just do this:

  • cp -pR /usr/local/www/mediawiki /tmp/mediawiki_backup

The new installation actually shouldn't clobber your old LocalSettings or anything else; the backup is just in case. However, any extensions probably need to be reinstalled because they're often tied to a specific version of MediaWiki.

This updates php (+related), imagemagick (+related), and freetype (+related):

  • portmaster -P www/mediawiki

Assuming the above went well:

  • make sure there's nothing special in /usr/local/www/mediawiki/UPGRADE
  • cd /usr/local/www/mediawiki/maintenance/
  • php update.php

Manually install appropriate versions of all of the extensions mentioned in LocalSettings.php. Assuming there are no changes required in LocalSettings.php, this just involves unzipping them into the Extensions directory. The site where you get the extensions has installation instructions.
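
For example, installing (or reinstalling) an extension distributed as a tarball usually just means unpacking it into the extensions directory (the extension name and tarball path here are hypothetical; get the one matching your MediaWiki version):

cd /usr/local/www/mediawiki/extensions
tar xzf /tmp/SomeExtension-REL1_22.tar.gz

If the extension was installed before, LocalSettings.php should already have the require_once line for it.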

Blank pages after upgrading PCRE

In February 2014, after upgrading PCRE to 8.34 or higher, Mediawiki versions prior to 1.22.1 will serve up articles with empty content. This is due to a change in PCRE 8.34 that necessitates a patch to Mediawiki and a cache purge.

Symptoms:

  • empty content when viewing pages, but edit boxes have the content
  • HTTP error log shows these messages:
    PHP Warning: preg_match_all(): Compilation failed: group name must start with a non-digit at offset 4 in /usr/local/www/mediawiki/includes/MagicWord.php on line 876
    PHP Warning: Invalid argument supplied for foreach() in /usr/local/www/mediawiki/includes/MagicWord.php on line 877

For reference:

  • Here's the Mediawiki bug report
  • Here's the patch (sorta) - I had to just copy-paste the $it and $group lines into /usr/local/www/mediawiki/includes/MagicWord.php around line 706 (exact spot varies), replacing the old $group line.

The fix takes effect immediately, but it doesn't affect cached pages, which will probably be any pages that were visited by anyone during the time the problem was happening. If you know what all these pages are, you can purge their cached copies one by one if you visit each one while logged in and load the page with ?action=purge appended to the URL. Obviously, this is not convenient if most of your wiki is affected.

Instead, I did a mass purge by using the PurgeCache extension. This required creating the /usr/local/www/mediawiki/extensions/PurgeCache folder and installing 4 files into it. Then I had to go to my user rights page at Special:UserRights/myusername and add myself to the developer group (which is deprecated, incidentally; another alternative would be to change the extension's code to require the sysop group instead). Finally, I visited Special:PurgeCache and clicked the button to finish the cache purge.

Update tt-rss

Via web interface

Updating tt-rss can be done from within the web interface, when logged in as Admin. Of course this will mean the installed port is out of date, but I wanted to try it to see if it works. It does, but in the future I think I'll just use the port to update it.

First, make a backup:

  • cp -pR /usr/local/www/tt-rss /usr/local/www/tt-rss.`date -j "+%Y%m%d"`

Now give tt-rss write permission:

  • chgrp www /usr/local/www
  • chmod g+w /usr/local/www /usr/local/www/tt-rss

It will make its own backup. The update will be a fresh installation in the tt-rss directory. When the update is done, copy your themes and any other customized files over from the backup. I'd undo the permission change as well:

  • chmod g-w /usr/local/www /usr/local/www/tt-rss*

This might be a good time to check to see if your themes also need to be updated.

Follow the instructions below to merge config.php changes and update the database.

Via ports

You can use portmaster on it like normal. However, it will probably cause PHP and some of its modules to update, and it will overwrite the old tt-rss installation. It does leave your config.php alone, but it's up to you to merge in any changes from config.php-dist.

To do an interactive merge:

  • mv config.php config.php.old
  • sdiff -d -w 100 -o config.php config.php-dist config.php.old

Now edit config.php, and set SINGLE_USER_MODE to true. Visit the site and see if you're prompted to do a database upgrade. If so, click through.

If everything is working, restart the feed update daemon:

  • /usr/local/etc/rc.d/ttrssd restart

Edit config.php to set SINGLE_USER_MODE back to false, and test again.

Fresh install via ports

My PHP upgrade (see below) obliterated my old tt-rss installation, but thankfully left the old config file and themes behind. Here's what I did:

  • portmaster www/tt-rss - installs php56-pcntl, php56-curl, php56-xmlrpc, php56-posix - Now you have a not-quite-up-to-date snapshot...good enough for now, but you have to use git to stay current. :/
  • copy config.php from old installation BUT SET SINGLE_USER_MODE or you'll get an access level error on login
  • install latest clean-greader theme
  • visit the installation in a web browser - "FEED_CRYPT_KEY requires mcrypt functions which are not found."
  • portmaster security/php56-mcrypt
  • service apache24 restart
  • visit the installation in a web browser - follow prompt to perform updates
  • unset SINGLE_USER_MODE
  • visit again and make sure it works
  • service ttrssd restart

Update PHP

This was how I did the PHP upgrade from 5.4 to 5.6 (roughly):

  • pkg delete '*php5*' - this deletes mediawiki and tt-rss too
  • cd /usr/ports/www/mediawiki && make config - I disabled ImageMagick
  • for php56 config: xcache is the only speedup option that works with 5.6 (no pecl or whatever the other one is). I enabled it
  • portmaster www/mediawiki
  • follow instructions to copy xcache.ini to where it goes. I set an admin username and pw hash in it.
  • portmaster www/mod_php56
  • portmaster www/php56_hash - needed for mediawiki logins to work, but wasn't installed for some reason
  • cd /usr/local/www/mediawiki/maintenance
  • php update.php - didn't work at first because it wasn't using AdminSettings.php. Solution: add require_once("AdminSettings.php"); to LocalSettings.php
  • service apache24 restart
  • see above for tt-rss

Upgrade to pkgng

In November 2013, I decided to upgrade from the stock pkg_install tools to the new pkgng, aka pkg. I followed the instructions in the announcement and all went well, except I had to write to the author of that announcement to learn that he meant to write enabled: yes instead of enabled: "yes". If you include the quotes, the pkg command will warn about the value not being a boolean.

pkgng replaces the pkg_install tools, including pkg_create, pkg_add, and pkg_info. It doesn't remove them from your system; you just have to remember not to use them. Putting WITH_PKGNG=yes in your /etc/make.conf tells portmaster and other tools to use the new tool, pkg, which has a number of subcommands, e.g. pkg info.

Incompatibility with portmaster

I was hoping to also use packages when I upgrade my ports, but as of mid-December 2013, running portmaster with the -P or --packages option results in a warning: Package installation support cannot be used with pkgng yet, it will be disabled.

NTP

For things to run smoothly, especially email, you need to keep your system's clock (the one that keeps track of the actual date/time) in sync with the outside world.

stock/classic/reference/Unix ntpd

When setting up FreeBSD via sysinstall, you're asked to pick a server for ntpdate to use.[1] This sets ntpdate_hosts="..." and ntpdate_enable="YES" in /etc/rc.conf, which causes /etc/rc.d/ntpdate to run at boot time to set the clock once, with immediate results. You're expected to make it run daily, if not more often, via a script or cron job.

But wait, ntpdate is deprecated! See its man page. You're now supposed to run ntpd, which adjusts the time gradually, and can connect to remote NTP servers as often as it needs to.

Ideally, you have it running as a daemon, enabled via ntpd_enable="YES" in /etc/rc.conf. You could also (or instead) do an on-demand clock sync via ntpd -q, same as running ntpdate. Either way, it uses /etc/ntp.conf for its configuration, which mainly just says what servers to check.

See below for a reason you may not want to run the daemon.

If you don't like running the daemon, just set up a root cron job to run /usr/sbin/ntpd -q -x > /dev/null every 4 hours or so.
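
For example, the root crontab entry could look like this:

# sync the clock every 4 hours
0 */4 * * * /usr/sbin/ntpd -q -x > /dev/null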

Rudimentary instructions for getting ntpd as a daemon are in the FreeBSD Handbook, but they don't cover security issues very well. In particular, you need this in your /etc/ntp.conf:

# 2013-2014: close off hole that lets people use the server to DDoS
#
# 1. disable monitoring
#
disable monitor
#
# 2. before 'server' lines, use the following, as per
#    https://www.team-cymru.org/ReadingRoom/Templates/secure-ntp-template.html
#
# by default act only as a basic NTP client
restrict -4 default nomodify nopeer noquery notrap
restrict -6 default nomodify nopeer noquery notrap
# allow NTP messages from the loopback address, useful for debugging
restrict 127.0.0.1
restrict ::1

The reason you need this is because this particular ntpd implementation listens on UDP port 123 all the time, exposing it to the outside world. It needs to keep that port open in order to work at all. You should try to reduce this exposure risk via restrict lines in ntp.conf; these can be used to say that only traffic purporting to be from certain hosts (the servers you want to get time info from) will be acknowledged. It wouldn't hurt to duplicate this info in your firewall rules. But I had bad luck with geographically nearby NTP servers going offline over the years, so I much prefer to use the pool.ntp.org hostnames as the servers to sync to. These pools, by nature, are always changing their IP addresses. Thus you can't use "restrict" lines or firewall rules to whitelist these IPs, because you don't know what they are. Therefore, it's better to not run the stock ntpd in daemon mode unless you only use static IPs in your ntp.conf server lines.

So instead of running stock ntpd, I run openntpd from the ports collection. It doesn't have this problem.

OpenNTPD

After searching in vain for a way to use the pools securely, I gave up and decided to run openntpd from ports. This is much, much simpler.

  • portmaster net/openntpd
  • In /etc/rc.conf:
ntpd_enable="NO"
openntpd_enable="YES"
openntpd_flags="-s"
  • You can use /usr/local/etc/ntpd.conf as-is; it just says to use a random selection from pool.ntp.org, and to not listen on port 123 (it'll use random, temporary high-numbered ports instead).
  • Logging is same as for the stock ntpd; just put this in /etc/syslog.conf:
ntp.*                                   /var/log/ntpd.log
  • touch /var/log/ntpd.log
  • service syslogd reload
  • Log rotation is probably desirable. Put this in /etc/newsyslog.conf:
/var/log/ntpd.log           644  3     *    @T00    JCN
  • service ntpd stop (obviously not necessary if you weren't running the stock ntpd before)
  • service openntpd start

You can tail the log to see what it's doing. You should see messages about valid and invalid peers, something like this:

ntp engine ready
set local clock to Mon Feb 17 11:44:06 MST 2014 (offset 0.002539s)
peer x.x.x.x now valid
adjusting local clock by -0.046633s

Upgrade Apache from 2.2 to 2.4

In mid-2014, Apache 2.4 became the default version in ports, and also db4 ports are deprecated. The only thing I had that was using db4 was apr (base libs needed by Apache), and it wasn't really using it, so I went ahead and just deleted the installed db4 versions, and added USE_BDB_VER=5 to my /etc/make.conf (apr can't use db6 yet).

Then I upgraded Apache to 2.4. It does require some Apache downtime and uninstalling 2.2 (!) because the 2.4 port will abort installation when it sees that some 2.2 files are in the way.

  1. remove any forcing of apache22 from /etc/make.conf
  2. build apr + apache24 from ports
  3. stop and delete apache22
  4. install apache24
  5. edit .conf files in /usr/local/etc/apache24 (see notes below)
  6. upgrade lang/php5
  7. install www/mod_php5 with same options as lang/php5 (yes, they split the Apache module into a separate port again!)
  8. 'service apache24 start' and cross your fingers
  9. in /etc/rc.conf, s/apache22_enable/apache24_enable/

Config file editing...

Every time you edit, use 'apachectl configtest' to check for problems. Some things to watch for:

  • Many modules are not enabled by default, but you probably want to enable a bunch of them, like these: include_module, deflate_module, actions_module, rewrite_module, ssl_module and socache_shmcb_module, cgi_module, userdir_module, php5_module, any proxy modules you need.
  • For the most part, you can copy-paste everything from the apache22 files, but don't include any allow/deny directives. Use the new format as explained at https://httpd.apache.org/docs/trunk/upgrading.html (see the example after this list).
  • Remove "NameVirtualHost" lines; they do nothing (since 2.3.11) and are going away.

Enable HTTPS support in Apache

Apache comes with HTTPS support (SSL) disabled by default. It's not too hard to enable, but configuration does require some effort, especially for a public server with name-based virtual hosts (i.e., serving different websites with different configurations as directed by the HTTP "Host:" header in incoming requests).

Upgrade OpenSSL

FreeBSD 8 comes with libssl (OpenSSL) 0.9.x, which only supports TLS 1.0. You can get decent protection with that, but it's better to use OpenSSL 1.x and get TLS 1.1 and 1.2 support, which makes it a lot easier to have "perfect" forward secrecy. All you have to do is install the security/openssl port, and then anything you compile that needs openssl will use the updated libs.

It's safe to build things like Apache, curl, and Spamassassin using the stock libssl and then rebuild them later after you upgrade libssl.

Get a certificate

To support HTTPS, your server needs an SSL certificate (cert). For a public server you don't want to use a self-signed cert; nobody will install it into their browser/OS's certificate store, and even if they do, their browser may still warn about how crappy the security is—the cipher may be strong, but no one can vouch for the cert's authenticity and trust. It's hard to explain, but it's kind of like how in journalism, a news outlet is unreliable if they don't publish corrections. A self-signed cert can't be revoked, for example if the server's private key is disclosed, but a "real" cert signed by a Certificate Authority (CA) can be.

To get a certificate, generally speaking, you have to:

  1. generate a private key (basically a random number + optional passphrase to encrypt it)
  2. use the private key to generate a Certificate Signing Request (CSR)
  3. submit the CSR to a Certificate Authority (CA).

Usually you have to pay the CA some money, and they have to do some kind of verification that you are a valid point of contact for the domain. The simplest, "basic" or "Class 1" type of verification is they send a code to (e.g.) hostmaster@example.org (example.org actually being whatever domain you're seeking a cert for), and if you paste the code into a form on their website, they know you saw the email and they'll issue you a cert.

Of course, if you are trying to do this on the cheap, you want a free cert. A web search for free SSL certificate will get you lots of results, but most of them will be services offering free S/MIME certificates. These are specialized certificates for signing or encrypting email messages before they are sent. S/MIME certs can't be used for web servers, or for encrypting an SMTP server's traffic.

Some CAs allow you to have them generate the private key and CSR for you. I don't recommend doing that, because it's better to know that only you have your private key and that the key and the CSR were generated on computers you control. So just generate your own key and CSR, and copy-paste that into the CA's web form.

Think about the security of your private key

If anyone ever gets a copy of your private key and they know (or can easily guess) the passphrase you used to encrypt it, then your key and all certs associated with it should be considered compromised. So, think about where you are storing the private key. How secure is that computer it's on? Is the passphrase written down somewhere? Is it easy to guess if someone has access to your other files? Hopefully it's not stored in plain text on the same box!

If your key is ever compromised, you have to revoke the certificates that were signed with it. Your CA should have a process for doing that and they shouldn't charge extra for it.

Generate a private key

  • openssl genrsa -out server.key 2048

Some considerations:

  • Use a passphrase? No. This would make it more secure, but then you'd have to enter it every time Apache is started or sent a SIGHUP.
  • How many bits? Some tutorials say 1024, but 2048 is pretty standard now, so use 2048. More bits means more CPU cycles needed for encryption, so I'm hesitant to use 4096 (my server is running on old hardware), lest it slow things down too much. However, I've read that encryption overhead really isn't that high, even on busy servers, so maybe it's no big deal to use 4096.

Generate a CSR

  • openssl req -new -key server.key -out server.csr -sha256

Older tutorials say to use -sha1, but SHA-1 is crackable now, so you need to use SHA-256 (-sha256); see https://community.qualys.com/blogs/securitylabs/2014/09/09/sha1-deprecation-what-you-need-to-know

You'll be prompted to enter country, state/province, locality, organization name, organizational unit name—these can be blank or filled in as you wish (although I found that I had to enter country/state/locality). Then you enter the Common Name (CN), which should be the "main" domain name the cert is for. If it's a wildcard cert, the CN would be something like "*.example.com". Otherwise it needs to match the main domain name that people will be using to access the server. Some CAs might want you to use a FQDN ("something.example.com").

You'll also be prompted to enter an email address that will be in the cert; I suggest something that works but isn't too revealing, like root or hostmaster at your domain.

If prompted for a challenge password, this is a password that you create and give to the CA. They can then use it in order to verify you in future interactions with them. It's a way to protect against someone impersonating you when they talk to the issuer.

Optional company name is probably for if your company is requesting the cert on behalf of someone else. I just leave it blank.

Now you have a text file, server.csr, the contents of which you'll copy-paste or otherwise upload to the CA.
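
Before submitting it, you can double-check what the CSR actually contains:

openssl req -noout -text -in server.csr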

Get your cert from the CA

Turn off any ad or script blockers when accessing the CA's website.

If you're new there, you'll probably have to verify an email address (doesn't matter what it is, as long as you can get the code they send you) and paste a validation code into a form. They may also try to make your browser accept an SSL cert for authenticating you. Think of it as an extra-special cookie.

Once you're in, follow whatever procedures they have laid out. Probably they will want to validate your domain. This requires them sending a validation code to an email address at the domain in question (e.g. hostmaster@yourdomain.com), and then you tell them what code you received. After the domain is validated, you give them your CSR text.

I found that when working with one particular CA to get a non-wildcard cert, if I generated a CSR for a bare domain (example.org), the CA required that I enter a FQDN (something.example.org). The resulting cert contained something.example.org as the CN and example.org as a Subject Alternative Name (meaning, "also good for this domain"). It worked fine.

If everything goes well, the CA will give you your requested cert (e.g. ssl.crt), along with root and intermediate certs (maybe in one file). You will need to tell Apache where all of these files are. The CA probably has instructions on their site.
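
A sanity check worth doing at this point: confirm that the cert you got back actually matches your private key. The two modulus hashes should be identical:

openssl x509 -noout -modulus -in ssl.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5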

Configure Apache HTTPD

Put the cert files wherever you want, just make sure that the folder and files are readable only by root.
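
For example, assuming the files live in /usr/local/etc/apache24/ssl (any location readable only by root works):

chown -R root:wheel /usr/local/etc/apache24/ssl
chmod 700 /usr/local/etc/apache24/ssl
chmod 600 /usr/local/etc/apache24/ssl/*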

Edit httpd.conf and uncomment the line that says something like

Include etc/apache24/extra/httpd-ssl.conf

Edit extra/httpd-ssl.conf and comment out the <VirtualHost _default_:443>...</VirtualHost> section and its contents (aside from what's already commented-out). Here's the general idea of what you should add instead:

Enable name-based virtual host configs (only needed on Apache 2.2; as noted above, NameVirtualHost does nothing in 2.4):

NameVirtualHost *:443

Set up an alias for a desired access-log format. I want to use the standard "combined" format, with a couple of SSL-specific details appended:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x" combined_plus_ssl

For each of the domains named in the certificate, you need a virtual host entry. You are mainly duplicating your httpd-vhosts.conf entries, but for port 443, with SSL stuff added, and (probably) different log file locations and formats.

In HTTPS, the client first establishes an unencrypted connection to port 443 at the server's IP address. This is just in order to negotiate encryption. Once this is done, the actual HTTP request is decrypted and handled.

When using a non-SNI-capable browser, the initial, unencrypted connection does not have a hostname/domain (identifying the desired website) associated with it, so the first <VirtualHost> entry that matches the IP address and port 443 will be handling it, and the certificate defined in that entry must be the same as the one in the entry that will be handling the actual HTTP request. The HTTP-handling entry could be the same entry as the initial connection-handling entry, or it could be separate.

When the connection comes from an SNI-capable browser, then it will probably have a hostname/domain, so an SNI-capable server (like Apache 2.2.12 and up, built with OpenSSL 0.9.8j and up, which is standard since mid-2009) will simply use the <VirtualHost> entry with the corresponding ServerName for both the initial connection and the actual HTTP request.

Once the encrypted connection is established, the rest of the communication is ordinary HTTP requests that arrive encrypted. These are sent to port 443 at the same IP address, and are decrypted and handled like normal (but with these configs, not the ones for port 80). Each request should contain a Host: header to specify the hostname/domain. So the first <VirtualHost> entry does double-duty, handling the HTTP service for one of these domains:

# This one will be for any encrypted requests on *:443 with
# "Host: example.com:443" headers.
#
# By virtue of being first, this entry also applies to the initial connection on
# *:443 (for non-SNI clients), and encrypted requests on *:443 with a missing or
# unrecognized Host header.
#
<VirtualHost *:443>
    ServerName example.com:443
    ServerAdmin root@example.com
    SSLEngine on
    SSLProtocol all -SSLv2 -SSLv3
    SSLCertificateKeyFile "/path/to/server.key"
    SSLCertificateFile "/path/to/ssl.crt"
    SSLCACertificateFile "/path/to/root.crt"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
    DocumentRoot "/path/to/whatever"
    CustomLog "/path/to/whatever" combined_plus_ssl
    ErrorLog "/path/to/whatever"
    LogLevel notice
</VirtualHost>

SSLCACertificateFile is for the CA root cert. Some CAs issue intermediate certs in a file separate from the root cert. In that case, you'd have to refer to that intermediate cert file as SSLCertificateChainFile in your Apache config. But if the root and intermediate cert are in a single file, you just use SSLCACertificateFile by itself.

You're going to want LogLevel to be notice or higher, because there's a lot of noise in the SSL info-level messages.

Of course * can be replaced with a specific IP address, if you want.

The rest of the VirtualHost entries are only for the specific Host: headers. Make sure there's one for each name the cert is good for.

# This one will be for any encrypted requests on *:443 with
# "Host: foo.example.com:443" headers, and for the initial
# connection on *:443 by SNI-capable clients wanting foo.example.com.
#
# Don't forget to mirror any non-SSL, non-log changes here
# with the corresponding *:80 entry in httpd-vhosts.conf.
#
<VirtualHost *:443>
    ServerName foo.example.com:443
    ServerAdmin root@example.com
    SSLEngine on
    SSLProtocol all -SSLv2 -SSLv3
    SSLCertificateKeyFile "/path/to/server.key"
    SSLCertificateFile "/path/to/ssl.crt"
    SSLCACertificateFile "/path/to/root.crt"
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
    DocumentRoot "/path/to/whatever"
    CustomLog "/path/to/whatever" "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
    ErrorLog "/path/to/whatever"
    LogLevel notice
</VirtualHost>

Ref (non-SNI): https://wiki.apache.org/httpd/NameBasedSSLVHosts Ref (SNI): https://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI

It's a good idea to have entries for any other domains hosted on the same IPs. That is, every HTTP website should have some kind of HTTPS service as well. This has a couple of ramifications:

  • You will have to keep the :443 <VirtualHost> entries in sync with the :80 ones.
  • When people try to access the HTTPS versions of sites that the certificate isn't valid for, they'll get warnings in their browsers. If they choose to accept the certificate anyway, what do you want to do? In my opinion, the best thing to do is redirect to an HTTPS site that the certificate is good for, or if there's no such option, just redirect to the regular HTTP site. In either case, their initial request should still be handled with SSL:
# People might try to access our hosted domains via HTTPS (port 443)
# even if we don't have certs for those domains. They'll get the default
# cert (as per the first VirtualHost entry) and despite the warning
# in their browser, the user has the option of accepting it.
# We want to redirect them to the appropriate, probably non-SSL location.
#
<VirtualHost *:443>
    ServerName non-ssl-host.example.com:443
    ... the usual SSL stuff goes here ...
    DocumentRoot "whatever"
    Redirect / http://non-ssl-host.example.com/
    CustomLog "whatever" combined_plus_ssl
    ErrorLog "whatever"
    LogLevel notice
</VirtualHost>

See if it works

  • Visit your web sites with https URLs and see what happens.
  • Use a third-party SSL checker like SSLShopper's SSL Checker.
  • If you use Firefox or Chrome, install the HTTPS Everywhere extension, create a custom ruleset for it, then see if you get redirected to the https URL when you try to visit the http URL of your web site.

Something else to check for is mixed content. Ideally, an HTTPS-served page shouldn't reference any HTTP-served scripts, stylesheets, images, videos, etc.; browsers may warn about it. Replace any http: links in your HTML with relative links (for resources on the same site) or https: links (for resources that are verifiably available via HTTPS). For example, in MediaWiki's LocalSettings.php, I had to change $wgRightsUrl and $wgRightsIcon to use https: URLs. There may still be some external resources which are only available via HTTP, but if they're outside your control, there's nothing you can do about that.

Enable HSTS

HSTS is a lot like HTTPS Everywhere, but it comes standard in modern browsers. You enable HSTS on the server just by having it send a special header in its HTTPS responses. The header tells HSTS-capable browsers to only use HTTPS when accessing the site in the future. In the main configuration, you need

  • LoadModule headers_module modules/mod_headers.so

On my system, this was already enabled. Then, in the <VirtualHost> section for each HTTPS site (not regular HTTP!), you need

  • Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

Test it in your browser by disabling HTTPS Everywhere (if installed), then visit the HTTPS website, then try to visit the HTTP version of the site. The browser should change the URL back to use HTTPS automatically.
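
You can also confirm that the header is actually being sent (the hostname is just an example):

curl -sI https://example.com/ | grep -i strict-transport-security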

POODLE attack mitigation

The attack forces a downgrade to SSLv3, which is now too weak to be relied upon. You have to disable SSLv3. IE6 users will be locked out.

  • SSLProtocol all -SSLv2 -SSLv3

CRIME attack mitigation

This is an easy one. Just ensure TLS compression is not enabled. It normally isn't enabled, but just in case:

  • SSLCompression off

BEAST attack mitigation

  • requires combo of SSLProtocol and SSLCipherSuite
  • use TLS 1.1 or higher, or (for TLS 1.0) only use RC4 cipher
  • you can't specify "RC4 for TLS 1.0, but no RC4 for TLS 1.1+" in mod_ssl
  • TLS 1.1+ can still be downgraded to 1.0 by a MITM!
  • RC4 has vulnerabilities, too!
  • Apache 2.2 w/mod_ssl is normally built w/OpenSSL 0.9.x, supporting TLS 1.0 only!

But wait, read on...

Enable perfect forward secrecy

Cipher suites using ephemeral Diffie-Hellman key exchange provide forward secrecy. "Perfect" forward secrecy (PFS) is an enhanced version of this policy.

  • it ensures session keys can't be cracked if private key is compromised
  • it requires ephemeral Diffie-Hellman key exchange ("EDH" or "DHE"), optionally with Elliptic Curve cryptography ("ECDHE" or "EECDH") to reduce overhead
  • ECDHE requires Apache 2.3.3+! (it's OK to leave it listed in 2.2's config though)
  • browser support varies

The basic config of

  • SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5

gives me a pretty nice report with lots of green "Forward Secrecy" results on the Qualys SSL Labs analyzer.

This gets more complicated if you want to mitigate the BEAST attack. There are suggestions [2][3] for dealing with it through the use of SSLCipherSuite directives that prioritize RC4 if AES isn't available. However, this is not good for Apache 2.2, because you'll probably end up disabling forward secrecy for everyone.

Reference for SSLCipherSuite: here (click). It may help to know that on the command line, you can do openssl ciphers -v followed by the same parameters you give in the SSLCipherSuite directive, and it will tell you what ciphers match.
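
For example, to list what the HIGH:MEDIUM:!aNULL:!MD5 setting above actually matches:

openssl ciphers -v 'HIGH:MEDIUM:!aNULL:!MD5'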

It's best to beef up your Diffie-Hellman setup by following the instructions at https://weakdh.org/sysadmin.html. In a nutshell:

  • cd /etc/ssl
  • openssl dhparam -out dhparams.pem 2048

After a nice long wait for that to finish, make Apache use the new params and a new order of cipher suites. In /usr/local/etc/apache24/extra/httpd-ssl.conf:

  • SSLOpenSSLConfCmd DHParameters "/etc/ssl/dhparams.pem"
  • SSLHonorCipherOrder on
  • SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

SMTP Authentication and STARTTLS support in Sendmail

FreeBSD comes with sendmail installed in the base system, with support for STARTTLS (the SMTP command that sets up encryption) disabled. You will get encryption support if you just tell sendmail where to find certificates.

To also do authentication—i.e. where authorized users log in to your server to have it deliver mail for them—you need to rebuild sendmail with support for the SASL libraries. Every time there is an update to the base system's sendmail, you'll have to do the rebuild in /usr/src, which can be a pain. Some administrators choose to install sendmail from the ports collection to make this easier, but that port is really mainly intended for helping upgrade sendmail installations on older systems.

Set up authentication

In order to set up authentication, rebuild sendmail with support for the SASL libraries. Just follow the instructions in the SMTP Authentication section of the FreeBSD Handbook.

Where the handbook refers to editing freebsd.mc or the local .mc, I made sure to use /etc/mail/`hostname`.mc.

The handbook also suggests increasing the log level from its default of 9, but doesn't say how. You do it by adding this to the .mc file:

dnl log level
define(`confLOG_LEVEL', `13')dnl

As mentioned previously, any time you update the OS with freebsd-update, you will probably overwrite your custom builds of system binaries. So for example, if you have built Sendmail with SASL2, it will be clobbered by freebsd-update, so you will have to rebuild it!

At this point, do the make install restart as directed, just to make sure nothing broke. sendmail should start up quietly. Maybe send yourself a test message and make sure you can still receive mail OK. Feel free to tail the mail log and see what it says.
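
In other words, roughly this, run from /etc/mail:

cd /etc/mail
make            # rebuild the .cf from the .mc
make install    # install the new sendmail.cf
make restart    # restart sendmail
tail -f /var/log/maillog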

The outcome here, if I understand correctly, is this:

  • SMTP clients (email programs) can now ask to interact with my server as a local user (with their login password), in order to use my server as a relay for their outbound mail. (Your ISP may not appreciate this; I know mine insists that people use the ISP's own relays exclusively.)

Previously, to allow relaying, I had set up each user's home IP address as a valid RELAY in /etc/mail/access. Obviously authentication is better. However...

I think the handbook's advice, as given, is rather dangerous, because it says to override the default authentication methods, which the documentation currently says are GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5. The handbook's advice omits KERBEROS_V4, which is no big deal, but then it also adds the LOGIN authentication method, which transmits the username and password in the clear (well, base64-encoded), which is a big deal if the connection isn't yet encrypted.

Regardless of whether you leave LOGIN (or PLAIN) in there, but especially if you do, I strongly suggest you also add this to the .mc file:

dnl SASL options:
dnl f = require forward secrecy
dnl p = require TLS before LOGIN or PLAIN auth permitted
dnl y = forbid anonymous auth mechanisms
define(`confAUTH_OPTIONS',`f,p,y')dnl

While you're in there, throw KERBEROS_V4 back in and change the comments to be more informative:

dnl authentication will be allowed via these mechanisms:
define(`confAUTH_MECHANISMS', `GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl

dnl relaying will be allowed for users who authenticated via these mechanisms:
TRUST_AUTH_MECH(`GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl

Set up encryption

Public key encryption via the STARTTLS command won't work until you tell sendmail where the private key and certificates are. So, in the .mc file add the following:

dnl certificate and private key paths for STARTTLS support
define(`confCACERT_PATH', `/etc/mail/certs')dnl
define(`confCACERT', `/etc/mail/certs/CAcert.pem')dnl
define(`confSERVER_CERT', `/etc/mail/certs/MYcert.pem')dnl
define(`confSERVER_KEY', `/etc/mail/certs/MYkey.pem')dnl
define(`confCLIENT_CERT', `/etc/mail/certs/MYcert.pem')dnl
define(`confCLIENT_KEY', `/etc/mail/certs/MYkey.pem')dnl

Adjust these paths as needed. For reference:

  • SERVER_CERT is a file containing only your cert (no intermediate or root certs), for receiving mail.
  • SERVER_KEY is a file containing the private key for SERVER_CERT, for receiving mail.
  • CLIENT_CERT is a file containing only your cert (no intermediate or root certs), for sending mail.
  • CLIENT_KEY is a file containing the private key for CLIENT_CERT, for sending mail.
  • CACERT is a file containing the CA cert which signed the SERVER_CERT (so, the CA root cert, preceded by any intermediate certs). It can also contain root & intermediate certs for any other CAs you want the server to offer to clients during the TLS handshake. If the client has a cert signed by one of those CAs, client authentication will be attempted by the server; otherwise, only the server gets authenticated by the client.
  • CACERT_PATH is the directory where Sendmail can find more acceptable CA certs (named or symlinked by their hashes; see below).

SERVER_CERT and CLIENT_CERT can point to the same file, as can SERVER_KEY and CLIENT_KEY.

Make sure the referenced directory and files exist. They must be readable only by owner (root, probably), and symlinks are OK. Everything must be PEM format.
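
For example, assuming everything lives in /etc/mail/certs as configured above:

chown -R root:wheel /etc/mail/certs
chmod 700 /etc/mail/certs
chmod 600 /etc/mail/certs/*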

Now make install restart and tail the mail log, watching for errors. Also run the tests at checktls.com.

Outcomes:

  • SMTP clients (email programs and mail relays) that connect to my server anonymously in order to hand off mail for my users (or for other domains I relay to) can now request encryption and communicate securely.
  • My SMTP server, when connecting to a remote SMTP server in order to deliver mail from my users, can now request encryption and communicate securely.

Certificate limitations

I have read that not all certificates work for STARTTLS.

Apparently you can run openssl x509 -noout -purpose -in path_to_your_cert to see what "purposes" your cert is approved for. Here's the output for my AlphaSSL wildcard cert:

Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : No
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No

I suspect "SSL client : Yes" is crucial.

Client certificate verification

What good is encryption if the client is being impersonated by some Man-in-the-Middle (MITM) who is choosing his favorite cipher and sending you his public key? The way to defend against this is to verify the client. But you also have to figure out what to do with unverifiable clients.

Certificates for trusted clients or their CAs are required on the server. Unless you configured the server not to request a certificate from the client, it will ask for one, and it will tell the client "I'm prepared to accept a certificate signed with these CA root certificates..." The certs it will accept are the root certs and self-signed certs that are in the confCACERT file, plus those that you have symlinks for in the confCACERT_PATH directory. The client will then decide whether it wants to offer the server a cert at all.

The Sendmail Installation and Operation Guide says the server can't accept too many root certs, because the TLS handshake may fail. But it doesn't say how many is too many; it just says to include only the CA cert that signed your own certs, plus any others you trust. I take this to mean that I'm not supposed to include the whole Mozilla root cert bundle, i.e. /usr/local/share/certs/ca-root-nss.crt, as installed by the security/ca_root_nss port (which may already be on the system, as it is needed by curl, SpamAssassin, gnupg, etc.).

To verify a client cert signed by a CA, you need a copy of the CA root certificate and any intermediate certificates to be on the system. As many certs as you want can be concatenated together in the confCACERT file, or they can be in separate files represented by symlinks, named for each cert's hash, in the confCACERT_PATH directory. If intermediate certificates are present, they can be in separate files too, or they can have the higher-level certs, on up to the root, concatenated to them in one file. For example, GoDaddy has a gd_bundle.crt file available for this purpose, containing gd_intermediate.crt followed by gd-class2-root.crt. In that case, the hash is for the first cert in the bundle (i.e., the lowest-level intermediate cert).
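As a sketch of that arrangement, using the GoDaddy file names mentioned above and working in the confCACERT_PATH directory:

# bundle: lowest-level intermediate first, root last
cat gd_intermediate.crt gd-class2-root.crt > gd_bundle.crt
# the hash symlink is named for the first cert in the bundle
ln -s gd_bundle.crt `openssl x509 -noout -hash < gd_bundle.crt`.0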

To verify a self-signed client cert, I believe you need a copy of the self-signed cert to be on the system; it is treated like a CA root cert. It can live in the file with the root certs or it can have a symlink in the confCACERT_PATH directory.

Here is how to generate the appropriate symlink (but replace both instances of cert.crt with the path to the appropriate file):

  • ln -s cert.crt `openssl x509 -noout -hash < cert.crt`.0

When your server receives email via an encrypted connection, you will see something like this in the Received: headers:

  • (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)

Here are the possible client certificate verification codes:

  • verify=OK means that the verification succeeded.
  • verify=NOT means that the server didn't ask for a cert, probably because it was configured not to.
  • verify=NO means that the server asked for a cert, but the client didn't provide one, or it didn't provide the intermediate and root certs along with the client cert. Maybe the client isn't configured to send the whole bundle, or it doesn't have a client cert to provide, or maybe the client didn't like the list of acceptable CA root certs the server offered. This code is not cause for concern unless you were expecting to be able to verify that client because you have the necessary certs installed.
  • verify=FAIL means that the server asked for a cert, and the client provided one that couldn't be verified. Maybe it's expired, or the server doesn't have the necessary root and intermediate certs, or the certs it has don't have signatures that match those presented, or one of the certs presented is listed in the CRL file (if any).
  • Other codes are NONE (no STARTTLS command issued), SOFTWARE (TLS handshake failure), PROTOCOL (SMTP error), and TEMP (temporary, unspecified error).

By default, Sendmail doesn't care what the code is; it'll proceed with the transaction anyway, if possible. Depending on your needs, you can configure Sendmail to react to these codes.
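For example (just a sketch; see the cf/README for the full TLS_Srv/TLS_Clt syntax), you could require one particular client (hostname hypothetical) to present a verifiable cert and use a cipher of at least 112 bits by putting this in /etc/mail/access and rebuilding the map with makemap hash /etc/mail/access < /etc/mail/access:

# require cert verification and >=112-bit encryption from this client
TLS_Clt:partner.example.com     VERIFY:112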

Even if there is no verification, the transaction is still encrypted; there is just no certainty of the identity of the connecting host.

The biggest caveat is that on a public MX host, you're required (by RFC 3207) to accept mail delivery over unencrypted connections, so you can't really do much verification of clients.

A client may present you with valid certs, but if you don't have the necessary certs installed to verify them, that's your fault, not the client's. And you can't treat verify=FAIL as a reason to refuse delivery while accepting all the other non-verify=OK codes; what's to stop the client from just trying again and deliberately triggering one of the other codes, e.g. by skipping STARTTLS entirely or by not sending a cert?

So really there are only a few choices (pick one):

  • Don't attempt verification at all.
  • Attempt verification of a handful of trusted hosts & root CAs, but only for informational purposes.
  • Require encrypted connections, attempt verification of a handful of trusted hosts & root CAs, and disallow relaying for those that don't get verify=OK. This is not an option for public servers.

Sendmail encryption related documentation of note

Official Sendmail docs:

  • /usr/share/sendmail/cf/README - massive doc explaining .mc & .cf files and all the options therein. Current copy online at MIT.
  • /usr/share/sendmail/cf/cf/knecht.mc - Eric Allman's .mc file with many interesting things in it
  • /usr/src/contrib/sendmail/doc/op/op.me (this is where it ends up on FreeBSD) - troff source for the Sendmail Installation and Operation Guide. On FreeBSD there's a Makefile in that folder, so you can cd /usr/src/contrib/sendmail/doc/op/ && make op.ps op.txt op.pdf to generate PostScript, ASCII (ugly), and PDF copies. It's indispensable! Online copy (HTML): https://www.sendmail.org/~ca/email/doc8.12/op.html

FreeBSD-specific:

  • /etc/mail/README - Mainly just explains how to work around an issue with getting it to work with jails.
  • SMTP Authentication - outdated chapter of the FreeBSD Handbook. The instructions for rebuilding Sendmail are good for enabling STARTTLS and AUTH, at least, but these docs need work.

Useful guides:

Cyrus SASL-related:

TLS/SSL and certificates:

Anti-spam measures

Enable a caching DNS server

FreeBSD 9 and earlier come with BIND preconfigured to be a caching DNS server listening on 127.0.0.1, but it is disabled by default. Enabling it reduces traffic to and from other DNS servers. You can also configure it to bypass your ISP's DNS servers, if those are what you normally use, in order to use certain RBL services to combat spam (see the next section).

On FreeBSD 10 and up, the DNS server is called Unbound, and by default it is configured as a local caching resolver. See https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-dns.html for how to enable it.

On FreeBSD 9 and lower, with BIND:

  • Add named_enable="YES" to /etc/rc.conf
  • Uncomment the forwarders section of /etc/namedb/named.conf and put your ISP's nameserver addresses in it (see the sketch after this list).
  • In /etc/resolv.conf, replace your ISP's nameserver addresses with 127.0.0.1 (or—and I haven't tested this—if you use DHCP, add prepend domain-name-servers 127.0.0.1; to the /etc/dhclient.conf section for your network interface; see the dhclient.conf man page).
  • service named onestart
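The forwarders section ends up looking something like this (the addresses here are placeholders for your ISP's nameservers):

options {
        // ...other options left as-is...
        forwarders {
                203.0.113.53;
                203.0.113.54;
        };
};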

Test it:

  • nslookup freebsd.org

The first line of output should say Server: 127.0.0.1 and the lookup should succeed.

At this point you are just forwarding; anytime you look up a host not yet in the cache, you are asking your ISP's nameserver to request it for you. It might pull it from its own cache.

Support RBLs

You are probably combatting spam by using RBLs, which rely on DNS queries to find out if a given IP is a suspected spammer.

Some RBL services block queries from the major ISPs, because they generate too much traffic. URIBL is an example of such a service.

To deal with this, after enabling the caching & forwarding DNS service as described above, you now need to disable forwarding for just the RBL domains. Then your server will query those domains' DNS servers directly. It will work if you just add something like this to named.conf (then restart named):

/* Let RBLs see queries from me, rather than my ISP, by disabling forwarding for them: */

// RBLs that are disabled but mentioned in my sendmail config
zone "blackholes.mail-abuse.org" { type forward; forward first; forwarders {}; };

// RBLs that are enabled in my sendmail config
zone "bl.score.senderscore.com" { type forward; forward first; forwarders {}; };
zone "zen.spamhaus.org" { type forward; forward first; forwarders {}; };

// RBLs that are probably enabled in SpamAssassin
zone "multi.uribl.com" { type forward; forward first; forwarders {}; };
zone "dnsbl.sorbs.net" { type forward; forward first; forwarders {}; };
zone "combined.njabl.org" { type forward; forward first; forwarders {}; };
zone "activationcode.r.mail-abuse.com" { type forward; forward first; forwarders {}; };
zone "nonconfirm.mail-abuse.com" { type forward; forward first; forwarders {}; };
zone "iadb.isipp.com" { type forward; forward first; forwarders {}; };
zone "bl.spamcop.net" { type forward; forward first; forwarders {}; };
zone "fulldom.rfc-ignorant.org" { type forward; forward first; forwarders {}; };
zone "list.dnswl.org" { type forward; forward first; forwarders {}; };

Secondary and tertiary MX records

To have a place for your inbound mail to queue when your host is down, it's common to set up a secondary MX that stores-and-forwards. The downside is that it probably attracts a lot of spam which doesn't get caught because the secondary MX accepts all mail for your domain, and your host, when it comes back online, will accept all mail from that secondary.

One way to partially work around this problem is to make your primary MX host also be a tertiary MX. Some spammers will favor the tertiary, but real mailers will try the secondary first.

If the spammers get wise, you can try using a different hostname for the tertiary MX, so long as its A record points to the same IP.
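In zone-file terms, the idea looks something like this (hypothetical names and addresses; mail and mx3 resolve to the same host):

; primary MX doubles as the tertiary under a different name
example.com.        IN  MX  10 mail.example.com.
example.com.        IN  MX  20 backup-mx.example.net.
example.com.        IN  MX  30 mx3.example.com.
mail.example.com.   IN  A   192.0.2.25
mx3.example.com.    IN  A   192.0.2.25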

Spamassassin

It's tempting to run every piece of incoming mail through Spamassassin, but you don't want to block messages that "look spammy" such as bounces and mailing list traffic (especially the spamassassin users' mailing list). I haven't figured out how to do it right, so I am only running Spamassassin as a user, via procmail, and my .procmailrc does not run administrative messages (including bounces) or mailing list traffic through Spamassassin, roughly as sketched below.
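Here is the rough shape of the ~/.procmailrc arrangement (a simplified sketch, not my actual rules; folder names are made up):

# deliver mailing list traffic without scanning
:0
* ^List-Id:
lists/

# deliver bounces and other mailer-generated mail without scanning
:0
* ^FROM_MAILER
admin/

# run everything else through SpamAssassin
:0fw
| /usr/local/bin/spamc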

Enable DCC

DCC will score any bulk mail higher. This means legit mailing list posts will also be scored higher, so using it means you have to be vigilant about whitelisting or avoiding scanning mailing list traffic.

To enable DCC checking, just uncomment the appropriate line in /usr/local/etc/mail/spamassassin/v310.pre.
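That is, remove the leading # from the DCC plugin line:

loadplugin Mail::SpamAssassin::Plugin::DCC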

The feature requires allowing UDP traffic in & out on port 6277. See http://www.rhyolite.com/dcc/FAQ.html#firewall-ports2. I didn't need to do anything special to enable this with my particular firewall configuration, but if I did, I would probably put an ipfw allow rule in /etc/rc.local, along the lines of the sketch below.
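If I did need rules, something like this would probably do it (untested sketch; the rule numbers are arbitrary):

# allow DCC client traffic (UDP port 6277) in both directions
ipfw add 3100 allow udp from me to any 6277
ipfw add 3101 allow udp from any 6277 to me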

Enable SPF...or not

SPF is for catching forged email. See http://www.akadia.com/services/spf.html. The idea is that email from a user at a particular domain will get a "pass" from the SPF checker if the mail comes from an IP address that the domain owner has approved via a special entry in their DNS records. Otherwise it gets a "fail" or "softfail" or whatever.

Getting a "pass" is worthless (Spamassassin score adjustment of zero) because so many spammers use custom domains that they control and set SPF records for. A "fail" is worth about 0.9. It's great for catching a certain kind of spam, as long as the domain owner keeps their SPF records updated and legitimate email from that domain always goes direct from the approved servers to the recipient's servers.

I've read several anti-SPF rants that seem to say there are other reasons SPF is "harmful," but they don't really explain the problems very well, and they don't seem to be based on empirical evidence of "harm."

Honestly, I very rarely get any SPF passes and even fewer fails. It's just wasting time to enable SPF checking in Spamassassin, so after enabling it for a while (in init.pre), I turned it off.

I look at SPF more as just protection for legitimate domains. Non-spam domains with SPF info in their DNS records are far less likely to be forged by spammers. So for my domain, I set up a TXT record that says "v=spf1 a mx -all". Now spammers are less likely to use my domain in the envelope sender address.
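In zone-file terms (using a placeholder domain), that record is just:

example.com.    IN  TXT  "v=spf1 a mx -all"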

v320.pre

There are a bunch of plugins that come with Spamassassin. Many are enabled by default via loadplugin lines in the various *.pre files. I enabled a couple more by uncommenting some more loadplugin lines in /usr/local/etc/mail/spamassassin/v320.pre.

This one is what allows the shortcircuit rules to work:

loadplugin Mail::SpamAssassin::Plugin::Shortcircuit

...You also have to create shortcircuit.cf; see below.

This one is an optimization to compile rules to native code:

loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody

shortcircuit.cf

Some basic rules for the Shortcircuit plugin come with SpamAssassin. These rules can be extended by using the sample Shortcircuiting Ruleset in the SA wiki.

spamc.conf

I feel it's a good idea to avoid scanning extremely large messages. Yes, this gives spammers a back door, but scanning incoming email shouldn't be something that cripples the server. If I had a faster box with more RAM, I would set this limit much higher.

# max message size for scanning = 600k
-s 600000

local.cf

I want suspected spam to be delivered to users as regular messages, not as attachments to a Spamassassin report:

report_safe 0

If a message matches the whitelists, just deliver it without doing a full scan:

shortcircuit USER_IN_WHITELIST       on
shortcircuit USER_IN_DEF_WHITELIST   on
shortcircuit USER_IN_ALL_SPAM_TO     on
shortcircuit SUBJECT_IN_WHITELIST    on

Likewise, if a message matches the blacklists, just call it spam:

shortcircuit USER_IN_BLACKLIST       on
shortcircuit USER_IN_BLACKLIST_TO    on
shortcircuit SUBJECT_IN_BLACKLIST    on

I've never seen BAYES_00 or BAYES_99 mail that was misclassified, so avoid a full scan on that as well:

shortcircuit BAYES_99                spam
shortcircuit BAYES_00                ham

My users get to have their own ~/.spamassassin/user_prefs files:

allow_user_rules 1

My users probably aren't sending out spam to other users on my system:

# probably not spam if it originates here (default score 0)
score NO_RELAYS 0 -5 0 -5

Custom rule: among my users (mainly me), I believe a message with a List-Id header is slightly less likely to be spam:

header  FROM_MAILING_LIST       exists:List-Id
score   FROM_MAILING_LIST       -0.1

Custom rule: a message purporting to be from a mailing list run by my former employer is much less likely to be spam:

header  FOURTHOUGHT_LIST        List-Id =~ /<[^.]+\.[^.]+\.fourthought\.com>/
score   FOURTHOUGHT_LIST        -5.0

Custom rule: a message from an IP resolving to anything.ebay.com can be whitelisted:

# maybe not ideal, but at one point I missed some legit eBay mail
whitelist_from_rcvd *.ebay.com ebay.com

I realize these custom rules could easily let spam through, but I was desperate to avoid false positives, which I was getting when using the AWL (Auto-WhiteList plugin), which despite copious training was making a lot of ham score as spam. AWL is no longer enabled in SpamAssassin by default, and I sure as hell am not using it ever again. So I probably don't need these rules anymore. I leave them in, though, because they remind me how to set up this kind of thing.

Before I set up the caching DNS server (with forwarding disabled for the RBL domains), the URIBL rules weren't working, so I had to disable the lookups by setting the URIBL scores to zero. Now that my URIBL queries come from my own IP rather than my ISP's DNS servers, the rules work properly. Therefore, I've got this commented out; it's just here for future reference:

#score URIBL_BLACK 0
#score URIBL_RED 0
#score URIBL_GREY 0
#score URIBL_BLOCKED 0

Bounces generated by my own MTA for mail that originates on my network will get scored lower (i.e., more likely to be ham) due to the NO_RELAYS rule. Without additional configuration, though, any bounces generated by remote MTAs, whether for mail originating on my network or elsewhere, will not be recognized or handled differently than any other inbound mail. Remotely generated bounces for mail originating elsewhere are called backscatter; backscatter is not actually spam, although it often contains spam or viruses, and it is generally unwanted.

In order to distinguish bounces from regular mail, and to distinguish bounces for mail originating here from backscatter (though by default it doesn't actually score them differently), I need to activate the VBounce plugin. This plugin is already enabled in v320.pre, but it doesn't actually do anything until it is told what the valid relays are for local outbound mail. So here I tell it what to look for in the Received headers to know that a bounce is for mail that originated from my network:

whitelist_bounce_relays chilled.skew.org

Bounces should then hit the ANY_BOUNCE_MESSAGE rule plus one of these:

  • BOUNCE_MESSAGE = MTA bounce message
  • CHALLENGE_RESPONSE = Challenge-Response message for mail you sent
  • CRBOUNCE_MESSAGE = Challenge-Response bounce message
  • VBOUNCE_MESSAGE = Virus-scanner bounce message

You can customize your scoring for these if you want, or in your .procmailrc you can specially handle scanned mail with these tags appearing in the X-Spam-Status header. However, I thought I shouldn't be sending obvious bounces to Spamassassin at all...hmm.
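If I did want to handle them in .procmailrc, a recipe along these lines (sketch only; the maildir name is made up) would divert anything SpamAssassin tagged as a bounce:

# file anything SpamAssassin flagged as a bounce into its own maildir
:0
* ^X-Spam-Status:.*ANY_BOUNCE_MESSAGE
bounces/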

Personal user_prefs

After saving and separating my ham and spam for a couple of months, then looking at the scores, I'm pretty confident that ham addressed to me is very unlikely to score much higher than 3, so I lowered the spam threshold from 5 to 4:

require_hits 4

Similarly, I'm finding ham addressed to me is very unlikely to be in the BAYES_50_BODY to BAYES_99_BODY range, so I bump those scores up a bit:

# defaults for the following are 0.001, 1.0, 2.0, 3.0, 3.5
score BAYES_50_BODY 2.0
score BAYES_60_BODY 2.5
score BAYES_80_BODY 3.0
score BAYES_95_BODY 4.0
score BAYES_99_BODY 4.5

I thought the default score for a Spamcop hit was pretty low, so I bumped it up:

# default for the following is 1.3, as of January 2014
score RCVD_IN_BL_SPAMCOP_NET 3.0

(I already have my MTA checking Spamcop, but it only looks at the IP connecting to me, so it lets through spam that originated at a Spamcop-flagged IP but that was relayed through a non-flagged intermediary.)

Remember the down-scoring I do for mailing lists in the site config? Well, if that mailing list traffic is addressed to me, I want to score it even lower:

score   FOURTHOUGHT_LIST        -100.0
score   FROM_MAILING_LIST       -1.0

I also have a bunch of whitelist_from entries for my personal contacts.

Finally, I want a Spamassassin report added to the headers of every message I get, so I know why it scored as it did:

add_header  all Report _REPORT_

Install sa-utils

sa-utils is an undocumented port that installs the script /usr/local/etc/periodic/daily/sa-utils. The purpose of the script is to run sa-update and restart spamd every day, so you don't have to do it from a cron job. You get the output, if any, in your "daily" report by email.

  • Install the mail/sa-utils port. When prompted, enable sa-compile support.
  • Put whatever flags sa-update needs in /etc/periodic.conf. For me, it's:
    daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org" and, after I've confirmed it's working OK, daily_sa_quiet="yes".
  • Assuming you enabled sa-compile support, uncomment this line in /usr/local/etc/mail/spamassassin/v320.pre:
    loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody

That's it.

Now, if you don't want to install sa-utils, but you are running SpamAssassin, you'll want a cron job that updates SpamAssassin rules and restarts spamd every day. Here's the basic version I used to use for the core rules:

  • /usr/local/bin/sa-update --nogpg --channel updates.spamassassin.org && /usr/local/etc/rc.d/sa-spamd restart

After using that for years, I switched to a version that incorporates SpamAssassin developer Justin Mason's "sought.cf" ruleset. First, outside of crontab, add the channels' GPG keys to sa-update's keyring:
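The exact commands aren't written down here, but presumably it's the same import used below for the sought ruleset (the key for the official updates.spamassassin.org channel ships with SpamAssassin):

  • fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY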

The caveat here is that the keys will eventually expire. For example, the one for sought.rules.yerp.org expires on 2017-08-09. At that point, you'll have to notice that the updates stopped working, and get a new key. To see the keys on sa-update's keyring, you can do this:

  • gpg --homedir /usr/local/etc/mail/spamassassin/sa-update-keys --list-key

So here's what goes in the crontab:

  • env PATH=/usr/bin:/bin:/usr/local/bin /usr/local/bin/sa-update -v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org && /usr/local/etc/rc.d/sa-spamd restart

The reason I override the cron environment's default path of /usr/bin:/bin is that sa-update needs to run the GPG tools in /usr/local/bin.

However, like I said, instead of a cron job, I'm using sa-utils now.

Update SpamAssassin and related

The SpamAssassin port is now mail/spamassassin, not mail/p5-Mail-SpamAssassin. See UPDATING.

For the options I've chosen, this will update various Perl modules, gettext, libiconv, curl, libssh2, ca_root_nss, gnupg1.

  • portmaster --packages mail/p5-Mail-SpamAssassin (old port name)
  • portmaster --packages mail/spamassassin (current port name)

The port is rather clumsy in that it deletes /usr/local/etc/mail/spamassassin/sa-update-keys, so after the update, I have to re-import the GPG key for the "sought" ruleset.

  • fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY

I asked about this on the mailing list, and cc'd the port maintainer, but no word yet.

If everything has installed correctly, restart sa-spamd when it's done. It probably stopped running during the install.

As of 3.4.0, if your system doesn't support IPv6, spamc will complain that it can't connect to spamd on ::1. To work around this, you need to add the new -4 flag (to force/prefer IPv4) in two places:

  • /usr/local/etc/mail/spamassassin/spamc.conf
  • spamd_flags in /etc/rc.conf