User:Mjb/FreeBSD

These are just some random notes relating to FreeBSD system administration, mainly for my own benefit. Any questions/comments, use the Discussion page.

How much disk space to allocate?
Here's what I did last time (only one drive and one regular user):
 * = 500 MB (actual use is ~340 MB)
 * = 500 MB (actual use is near zero for me)
 * = 1.5 GB (actual use is ~790 MB for me)
 * = the rest (68 GB in my case, actual use is under 20 GB)

See what version of the OS is actually running
The standard method doesn't really work, because it just shows you what the OS/version/branch/patch level was when the kernel was compiled. The current info must be obtained some other way. If your OS source code is current, then this should work (example output follows):

TYPE="FreeBSD" REVISION="8.3" BRANCH="RELEASE-p7"

Upgrade to a new patch level
The patch level (" " in the example above) correlates with security patches that were released as replacement binaries for the OS.

Of course it's possible you applied patches and rebuilt some binaries yourself, according to instructions in the security advisories you get by email (you did sign up for them, right?) ... in which case the patch level is not really accurate.

Regardless, these binary patches are only available for OS versions that were distributed as binaries, and that are still "supported", i.e. not more than 2 years old. I think this means pretty much just the latest -RELEASE branches. (-STABLE isn't distributed in binary form, and they don't worry about security at all for -CURRENT.) Therefore, you may first have to do a minor version update (see next section) or new patches won't even be available for your system.

First, get the patches (maybe unset the GZIP environment variable first to reduce clutter):
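The command itself has been lost here; presumably it was the standard fetch step:

  freebsd-update fetch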



It'll download them to a temporary location and tell you what will be changed. If you have the OS source code installed in /usr/src, source patches will be included in the update as well.

Now, install them:
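Again, the command has been lost; presumably:

  freebsd-update install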



Whether a reboot is needed depends on what was updated. You have to decide that yourself. Obviously anything kernel-related should make you want to do a reboot. If you don't do a reboot, but system daemons were updated, you'll need to restart those.

Upgrade to a new minor version of the OS
Reference: FreeBSD Update section of the FreeBSD Handbook

The following info is based on my upgrade from 8.1-RELEASE to 8.3-RELEASE, and from 8.3-RELEASE to 8.4-RELEASE (assumes generic kernel):

Prepare the environment
I normally have "-v" in my GZIP environment variable, and this really clutters the output of, so unset it:

Get new files
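The command has been lost here; for a minor version upgrade it's the freebsd-update upgrade invocation, e.g. (target release is an example):

  freebsd-update -r 8.4-RELEASE upgrade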


Takes several hours.

Merge files
Most merges will happen automatically, but some un-mergeable files like /etc/passwd will be reported, and you need to answer 'y' and merge them manually...but you don't get a nice merge interface, you just get dumped into an empty text editor! What you are expected to do here is create a merged file. Be very careful!

The goal is to compare and merge each old file from the directory tree rooted at  (copied from the live system) with the corresponding new file in  XXX, where XXX is the new FreeBSD version you're upgrading to (e.g.  ). You need to put each merged file into the same relative location under  , which is where the empty text editor will be saving to.

In my upgrade to 8.3-RELEASE, I just elected to go into the editor (you have no choice, really), loaded the old file, and saved it as-is. I didn't bother merging in the new one! Not ideal, but the least amount of hassle, right?

In my upgrade to 8.4-RELEASE, I tried a new approach: merge the files in a separate window, pre-populating the  folder, so that when the editor is opened, it's not empty, but rather has the merged file in it. Then I can just give it a once-over and save the result.

To accomplish this, in a separate terminal, as root, it would be nice to be able to run mergemaster. So I tried to do it like this:



However, it didn't work. I have asked about it on the freebsd-questions mailing list. Here is another, cruder method I tried, which did work:



The downside of this method is that it assumes you want to do an interactive merge (sdiff) of every file, whereas sometimes you are really going to want to save time and just choose to use the old or new file without merging; mergemaster would give you that ability.
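The cruder method itself was lost in formatting. As a rough sketch of the kind of loop meant here (the merge directory paths are assumptions, not the original command), something like this walks the old tree and runs sdiff on each file pair:

  cd /var/db/freebsd-update/merge/old || exit 1
  find . -type f | while read f; do
    mkdir -p "../new/$(dirname "$f")"
    # read keystrokes from the terminal, since stdin is the pipe from find
    sdiff -o "../new/$f" "$f" "../8.4-RELEASE/$f" < /dev/tty
  done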

Regardless of how you do your merge, once you've saved all the files in the editor, you'll be prompted to approve a diff for each one. If you answer "n" to any of these prompts, it will abort the entire upgrade and you will have to start over! So hopefully the merges are all OK, and you can continue.

However, among the changes you're asked to approve may be unspecified differences in pwd.db and spwd.db, the binary files that contain your password database. You have no choice but to answer "y", but for God's sake, rebuild those files before rebooting! (see below).

Review changes
freebsd-update now presents you with lists of all the files that will be deleted, all the files that will be added, and all the files that will be modified.

Pay special attention to the changes in /etc.

After showing you the lists, that's it. The changes are staged and ready to be made, but nothing has actually been installed yet.

Install the new files
You are about to overwrite your real system files. I suggest making a backup of /etc first:
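The backup command was lost here; something simple works, e.g. (the destination name is just an example):

  cp -Rp /etc /etc.pre-upgrade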

Cross your fingers:
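Presumably:

  freebsd-update install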

Rebuild soon-to-be-clobbered databases
Now, unless you got mergemaster to work, you probably have to do the things that mergemaster normally would do for you.

''It seems things don't get replaced until after reboot. This may be a real problem!''

If  or   were changed or if   or (most importantly, I think)   changed (e.g., as in 8.4-RELEASE, got set to new defaults), then a pwd_mkdb run will be necessary to regenerate the .db files, and you want to do this before your shutdown or you'll never get to log back in.

Normally you would do this: This will use master.passwd as the source file, and the -p flag means generate a new passwd from it, in addition to the .db files.
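That "normal" command is presumably:

  pwd_mkdb -p /etc/master.passwd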

However, the files in  are, at this stage, untouched. The new versions are sitting gzipped in, where XXXXX is a random ID; look at the directory creation date to figure out which one is current, if there's more than one.

So I think what you need to do is something like this, to inspect the new files:

total 10
6 -rw-r--r--  1 root  wheel   4.0k Jun 25 00:48 master.passwd
4 -rw-r--r--  1 root  wheel   3.2k Jun 25 00:49 passwd
0 -rw-r--r--  1 root  wheel     0B Jun 25 00:49 pwd.db
0 -rw-r--r--  1 root  wheel     0B Jun 25 00:49 spwd.db

Obviously pwd.db and spwd.db are crap and we'd be in trouble if we installed those empty files!

If master.passwd looks OK, then try generating a new passwd file and pair of .db files:
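The command was lost; pwd_mkdb can generate into an alternate directory with -d, so presumably something along these lines (the directory is a placeholder for wherever the new master.passwd is sitting):

  pwd_mkdb -d /path/to/newfiles -p /path/to/newfiles/master.passwd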

total 138
6  -rw-------  1 root  wheel   4.0k Jun 25 00:48 master.passwd
4  -rw-r--r--  1 root  wheel   3.2k Jun 25 00:53 passwd
68 -rw-r--r--  1 root  wheel    68k Jun 25 00:53 pwd.db
60 -rw-------  1 root  wheel    60k Jun 25 00:53 spwd.db

Quite a bit better. As you can see, master.passwd was just moved over, and the other three files were generated. Now to replace them:

And finally, clean up:

You'll have to go through a similar process if you use sendmail and you merged in any changes to  or   files. Ordinarily, the most thorough way is this:
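The commands were lost here; the usual "thorough way" with the stock /etc/mail Makefile is something like:

  cd /etc/mail && make all install restart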

But as before, the files haven't been installed yet!

Likewise, changes to  require rebuilding a database:
 * (see the man page for exact syntax)

Same for :
 * (see the man page for exact syntax)

There's a bug filed about this, but only for the master.passwd; it doesn't take into account this latest development where .db files are clobbered: http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/165954

Reboot and continue
OK, now reboot to try out the new kernel:
 * (again, this assumes you want the generic kernel)
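The reboot command itself was lost; presumably just:

  shutdown -r now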

Hope & pray it comes back up. If it does, do this again to get world installed:



This worked for me, for the upgrade to 8.3-RELEASE.

For the 8.4 upgrade, after this stage, it said:

Completing this upgrade requires removing old shared object files. Please rebuild all installed 3rd party software (e.g., programs installed from the ports tree) and then run "/usr/sbin/freebsd-update install" again to finish installing updates.

Worry about that in a minute. First, realize that at this point, /etc has been modified, so it's a good idea to make sure you like the look of the new files, especially these:
 * (if changed, you need to run the appropriate  command in   ...perhaps  )
 * (if changed, you need to run  to rebuild  )
 * (if changed, freebsd-update should've run  to rebuild  )
 * (if changed, you need to run  to rebuild  )
 * (if changed, freebsd-update should've run  to rebuild  )

If anything's amiss, remember you made a backup in.

OK, now you can follow the directions below to update your ports tree and rebuild everything(!). Personally I don't like doing this because things tend to go wrong if you don't do it piecemeal. The downside is that some things will be left un-updated. But you can deal with that; read on...

Check for cruft
After the upgrade, you might want to see if anything out-of-date got left behind. If there's anything, you can run  to get rid of it; it will ask you about each file, normally. Ref: http://www.freebsd.org/doc/handbook/make-delete-old.html

There are a couple of options for checking the installed shared libraries:
 * If you install the  port, you can run   to check for missing libraries. It even tells you which ports are affected.
 * If you install the  port (note: requires Ruby), you can run   to check for missing libraries, check for unused libraries, and see exactly which binaries use each library. To figure out which port installed the file needing the library, you need to run.

Sample output of :

gamin-0.1.10_4: /usr/local/libexec/gam_server misses libpcre.so.0
gio-fam-backend-2.28.8_1: /usr/local/lib/gio/modules/libgiofam.so misses libpcre.so.1

Rebuilding these two ports should be sufficient to get them linked to the current libpcre library. (Double-checking  shows that there's a   now).

Why did I have these ports installed? tells me gamin is required by gio-fam-backend, and  reveals that gio-fam-backend isn't required by anything that I currently have installed. This is a weird port, though, and it is not something you want to deinstall. It is FreeBSD-specific, and is kind of a companion to the glib port. (Though apparently they decommissioned it - see the 20130731 entry in UPDATING). reveals what's using glib: ImageMagick & MediaWiki.

Anyway,  takes care of the problem. Now when I run  and   (the new versions), there are no problems. The question now is whether I need to update ImageMagick. The lack of problems reported by  suggests the answer is no.

Reboot to restart daemons
After upgrading from 8.3-RELEASE to 8.4-RELEASE,  started accumulating error messages from sshd, every time someone tried to log in: error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key

Indeed, that key file didn't exist until after another reboot, which didn't happen until a mysterious, probably unrelated crash a month after the upgrade.

Web searches suggest that most people running into this problem aren't able to log in at all until they run a special  command to create the missing files, but I was having no such trouble.

I think that for me, the only problem was that after finishing the OS upgrade, sshd needed to actually be restarted. This makes me think that maybe it's a good idea to restart all the daemons as the penultimate step in upgrading the OS. To do that, you could run, but it might be easier to just reboot.

Ports installation & upgrade
Here's some general info about this topic.

Get a quick list of installed ports

 * with pkgng,
 * without pkgng,
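The commands themselves were lost here; presumably they were something like:

  pkg info      # with pkgng
  pkg_info      # without pkgng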

Portmaster flags
Some of the most useful flags for portmaster:
 * will make it delete old distfiles after each port is installed, rather than asking you about it. ( would make it keep them.)
 * will prevent rebuilding/reinstalling ports that don't need it. But for some reason, you have to specify more than one port on the command-line for this to work.
 * depends on what else you are doing. Usually it means do a dry run. But in conjunction with,  ,  ,  ,  , or  , it means "answer no to all questions."
 * will make it use a package (major timesaver) for both the port if the latest package isn't older than the version in the ports collection. Otherwise, it falls back on building the port.
 * will make it try to use packages for build dependencies... I haven't figured this one out yet. It seems to not be necessary?

Here's an example (to update Perl modules, and Perl if needed):
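The example command was lost; a sketch of the kind of invocation meant (the flags and the p5- glob are assumptions):

  portmaster -d p5-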

Environment prep
If you have set your BZIP2 environment variable to include, like I have, and you have portaudit installed, then you will probably find that every time you do anything with ports or packages, you get a bunch of useless lines that say  , and FreeBSD's   misinterprets this as problems needing to be fixed. I reported this bug to the freebsd-ports mailing list, but I doubt it will get fixed unless I submit a patch, myself.

Update portmaster
Probably a good idea before doing anything else with portmaster.
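The command was lost; presumably just:

  portmaster ports-mgmt/portmaster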

Since  needs multiple packages to be specified, we can't use it here. Thus, if there's nothing to update, you will end up reinstalling the same version you already had.

Delete cached options from previous builds of stale ports
This just does some cleanup of /var/db/ports, which is where the options you chose in the 'make config' step of port building are stored. The options for ports that are currently properly installed will be left alone.
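The command was lost; portmaster's option for this is --check-port-dbdir, so presumably:

  portmaster --check-port-dbdir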

Update ports collection
The ports collection is a folder tree containing Makefiles and patches for 3rd-party software. Anytime you want to add or update 3rd-party software, first make sure the ports collection is up-to-date. Reference:

First time using portsnap or just want a fresh tree? Download the current ports tree to a temporary location (fetch), then install it in /usr/ports, replacing whatever was there before (extract):
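Presumably:

  portsnap fetch extract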

Not the first time? Download updates to a temporary location (fetch), then apply them to the existing ports tree (update), deleting any modified or added files:
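Presumably:

  portsnap fetch update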

Now go look at /usr/ports/UPDATING.

A port has moved
The Handbook doesn't cover this, but sometimes the ports collection folder for a port that you've installed will get moved.

These moves are listed in /usr/ports/MOVED, which is read by portmaster. So, although you could look at that file beforehand, you probably won't find out about a move until you run, or when you try to update your installed port.

For example, there was once a www/mediawiki meta-port, which pointed to the actual port for the latest stable version. I had used it to install mediawiki119. When I went to update it with, I got the following error:

===>>> The www/mediawiki port moved to www/mediawiki119
===>>> Reason: Rename mediawiki to mediawiki119

The first place to look when you see this message is /usr/ports/UPDATING. Often, there will be a note about it there, with instructions. In this case, though, there wasn't, so I asked about it on freebsd-ports and also on freebsd-doc. I was told that UPDATING will only have unusual things in it, and this particular situation didn't qualify, because the version hadn't actually changed.

I don't think there's a way to just update the list of installed packages so that it will know about the move. You have to want to update the port, and then use portmaster's -o flag to say which new port you want to replace the old one with.

So, for an ordinary move, the answer is:
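The command was lost; portmaster's -o syntax takes the new origin first, then the installed port, e.g. (the names are placeholders):

  portmaster -o category/newport installed-port-name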

For example, I could have updated without changing the version:

But since there was a newer version available, I decided to update to it:

lzma library errors
This probably won't come up again, but maybe it will help someone else. After updating to 8.4-RELEASE, I was trying to rebuild the PHP port (as part of the MediaWiki upgrade), but it failed early in the process with this message:



Looking at that config.log file, I saw more detail:

On a hunch, I decided to see what would happen if I tried to restart Apache:

When Googling for answers, I found some mention that ports needing the lzma port now need to use the xz port. Something doesn't sound right about that, though, because the xz port is deprecated as well.

It turns out that at some point, the xz port had been installed, needed by some other port. This resulted in some "lzma" libs being placed in /usr/local/lib a very long time ago. Better lzma libs later became part of the base system in. Since the old libs were still sitting in /usr/local/lib, they were taking precedence when other ports needed them. This eventually prevented the PHP port from building, due to its reliance on libxml2, which in turn relies on liblzma, which needs to be up-to-date.

Simply moving the outdated libs out of  took care of the problem. Specifically, it was. Really, though, the solution is to  (or whatever version you have).

more lzma library errors
While attempting to upgrade all of my installed ports on another occasion in late 2013, the graphics/gd port failed to build because libtool was looking for the nonexistent. A 2011 discussion about it suggested the fix might be as easy as deleting and reinstalling ImageMagick:

This just led to the same kind of failure when the build tried to link in ImageMagick's tiff coder. So I tried rebuilding the underlying lib first:
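The command was lost; assuming the underlying lib in question was graphics/tiff, it would have been something like:

  portmaster graphics/tiff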

Then I went back to the ImageMagick build:

That got me past the tiff coder error, so I continued:

That worked as well.

ImageMagick's enormous set of dependencies and lengthy build process have been problematic for me in the past. I'd rather exclude it from any port upgrades, but I'm not sure it's possible or wise to do so.

sa-utils
sa-utils is an undocumented port that installs the script. The purpose of the script is to run sa-update and restart spamd every day, so you don't have to do it from a cron job. You get the output, if any, in your "daily" report by email.
 * Install the mail/sa-utils port. when prompted, enable sa-compile support.
 * Put whatever flags sa-update needs in . For me, it's:  and, after I've confirmed it's working OK,.
 * Assuming you enabled sa-compile support, uncomment this line in :

That's it.

Now, if you don't want to install sa-utils, but you are running SpamAssassin, you'll want a cron job that updates SpamAssassin rules and restarts spamd every day. Here's the basic version I used to use for the core rules:
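The crontab line itself was lost; a sketch of the basic idea (the timing and the sa-spamd rc script name are assumptions):

  0 4 * * * /usr/local/bin/sa-update && /usr/local/etc/rc.d/sa-spamd restart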

After using that for years, I switched to a version that incorporates SpamAssassin developer Justin Mason's "sought.cf" ruleset. First, outside of crontab, add the channels' GPG keys to sa-update's keyring:

The caveat here is that the keys will eventually expire. For example, the one for sought.rules.yerp.org expires on 2017-08-09. At that point, you'll have to notice that the updates stopped working, and get a new key. To see the keys on sa-update's keyring, you can do this:

So here's what goes in the crontab: The reason I override the cron environment's default path is that sa-update needs to run the GPG tools in /usr/local/bin.

However, like I said, instead of a cron job, I'm using sa-utils now.

tt-rss
The www/tt-rss port is Tiny Tiny RSS, a web-based feed reader I'm now using instead of Google Reader.




 * edit :
 * DB_USER needs to be  (I didn't bother creating a special user...)
 * DB_NAME needs to be
 * DB_PASS needs to be whatever's appropriate for DB_USER
 * DB_PORT needs to be
 * SELF_URL_PATH needs to be whatever is appropriate
 * FEED_CRYPT_KEY needs to be 24 random characters
 * REG_NOTIFY_ADDRESS needs to be a real email address
 * SMTP_FROM_ADDRESS needs to at least have your real domain

 * visit http://yourdomain/tt-rss/

You'll probably get this:

Startup failed: Tiny Tiny RSS was unable to start properly. This usually means a misconfiguration or an incomplete upgrade. Please fix errors indicated by the following messages:

FEED_CRYPT_KEY requires mcrypt functions which are not found.

The solution, after making sure mcrypt isn't mentioned in :
 * visit http://yourdomain/tt-rss/ and you should get a login screen. u: admin, p: password.
 * Actions > Preferences > Users. Select checkbox next to admin, choose Edit. Enter new password in authentication box.

The password is accepted, but subsequent accesses to all but the main Preferences page result in. There's nothing in the ttrss_error_log table in the database. Apache error log shows a few weird things, but nothing directly related:

File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php
File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php

Logging in again seems to take care of it, unless I change the password again. This only affects the admin user.

Create a new user, and login as that user. Subscribe to some feeds. Feeds won't update at all unless you double-click on their names, one by one.

Now the update daemon:


 * In, add

Feeds should now update automatically, as per the interval defined in Actions > Preferences > Default feed update interval. Minimum value for this, though, is 15 minutes. This can also be overridden on a per-feed basis.

Themes are installed by putting uniquely named .css files (and any supporting files & folder) in tt-rss's  directory. I decided to try clean-greader for a Google Reader-like experience. It works great, but I'm not happy with some of it, especially its thumbnail-izing of the first image in the feed content, so I use the Actions > Preferences > Customize button and paste in this CSS:

/* use a wider view for 1680px width screens, rather than 1200px (see also 1180px setting below) */
#main { max-width: 1620px; }

/* preferences help text should be formatted like tt-rss.css says, and make it smaller & italic */
div.prefHelp { color: #555; padding: 5px; font-size: 80%; font-style: italic; }

/* tidy up feed title bar, especially to handle feed icons, which come in wacky sizes */
img.tinyFeedIcon { height: 16px; }
div.cdmFeedTitle { background-color: #eee; padding-left: 2px; height: 16px; }
a.catchup { padding-left: 1em; color: #cdd; font-size: 75%; font-style: italic; }

/* Narrower left margin (44px instead of 71px), greater width (see also #main above) */
.claro .cdm.active .cdmContent .cdmContentInner,
.claro .cdm.expanded .cdmContent .cdmContentInner { padding: 0 8px 0 50px; max-width: 1180px; }

/* main feed image is often real content, e.g. on photo blogs, so don't shrink it */
.claro .cdm.active .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
.claro .cdm.active .cdmContent .cdmContentInner p:first-of-type img,
.claro .cdm.active .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child,
.claro .cdm.expanded .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
.claro .cdm.expanded .cdmContent .cdmContentInner p:first-of-type img,
.claro .cdm.expanded .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child {
  float: none;
  margin: 0 0 16px 0 !important;
  max-height: none;
  max-width: 100%;
}

/* scroll bars are too hard to see by default */
::-webkit-scrollbar-track { background-color: #ccc; }
::-webkit-scrollbar-thumb { background-color: #ddd; }

py-fail2ban
After installing the port, create /usr/local/etc/fail2ban/action.d/bsd-route.conf with the following contents:
# Fail2Ban configuration file
# Author: Michael Gebetsroither, amended by Mike J. Brown
#
# This is for blocking whole hosts through blackhole routes.
#
# PRO:
#   - Works on all kernel versions and has no compatibility problems (back to debian lenny and WAY further).
#   - It's FAST for very large numbers of blocked ips.
#   - It's FAST because it blocks traffic before it enters common iptables chains used for filtering.
#   - It's per host, ideal as action against ssh password bruteforcing to block further attack attempts.
#   - No additional software required beside iproute/iproute2
# CON:
#   - Blocking is per IP and NOT per service, but ideal as action against ssh password bruteforcing hosts

[Definition]

# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
actionstart =

# Option:  actionstop
# Notes.:  command executed once at the end of Fail2Ban
# Values:  CMD
actionstop =

# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
actioncheck =

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
actionban  = route -q add <ip> 127.0.0.1 <blocktype>

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
actionunban = route -q delete <ip> 127.0.0.1

[Init]

# Option:  routeflags
# Note:    Space-separated list of flags, which can be -blackhole or -reject
# Values:  STRING
blocktype = -blackhole

Also create jail.local. In it, you can override examples in jail.conf, and add your own:

[apache-badbots]
enabled  = true
filter   = apache-noscript
action   = bsd-route
           sendmail-buffered[name=apache-badbots, lines=5, dest=root@yourdomain]
logpath  = /var/log/www/*/*error_log

[apache-noscript]
enabled  = true
filter   = apache-noscript
action   = bsd-route
           sendmail-whois[name=apache-noscript, dest=root@yourdomain]
logpath  = /var/log/www/*/*error_log

[sshd]
enabled  = true
filter   = bsd-sshd
action   = bsd-route
           sendmail-whois[name=sshd, dest=root@yourdomain]
logpath  = /var/log/auth.log
maxretry = 6

[sendmail]
enabled  = true
filter   = bsd-sendmail
action   = bsd-route
           sendmail-whois[name=sendmail, dest=root@yourdomain]
logpath  = /var/log/maillog

Be sure to replace yourdomain. Check for errors with the command

In, add the line   and then run
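The specifics were lost here; presumably it's the usual rc.conf knob and rc script (names assumed from the port's conventions):

  echo 'fail2ban_enable="YES"' >> /etc/rc.conf
  /usr/local/etc/rc.d/fail2ban start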

Disable any cron jobs that were doing work that you expect fail2ban to now be doing.

Check your log rotation scripts to make sure they create new, empty files as soon as they rotate the old logs out. Apache HTTPD, for example, won't create a new log until there's something to put in it, and if fail2ban notices the logfile is missing for too long, it will disable the jail.

spamilter
I thought I'd try SPF checking at the MTA. An SPF failure is when the envelope sender domain has published an SPF record that does not authorize mail from such a sender to arrive from the actual host that's connecting to my mail server. I feel these failures should just result in immediate rejection; no need to send such messages on to users.

However, this will "break" raw message forwarding, where legitimate email is relayed through a random host (such as via a .forward or my MUA's "bounce" command). This is a big downside to SPF.

I see that SpamAssassin has a small SPF whitelist for some major domains like Amazon. I guess this means these domains don't keep their SPF records up-to-date?

spamilter is a Sendmail filter that can do a few types of anti-spam checks, including SPF, if so configured.

When installing the port, enable LIBSPF. Other options are PAM and SMTP_AFTER_POP3. I don't know what these do!

Add to /etc/syslog.conf:

!Spamilter
*.=info    /var/log/spamilter.log
*.<>info   /var/log/spamilter.err

Tell syslogd to reload the config:
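Presumably:

  /etc/rc.d/syslogd reload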

Add to :
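Presumably the port's rc.conf knob (variable name assumed):

  spamilter_enable="YES"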

More setup:
 * somewhere publicly accessible

In spamilter.rc, I want SPF checking only:
 * set PolicyUrl to the public URL for policy.html
 * set DnsBlChk to 0
 * set SmtpSndrChk to 0
 * set MtaHostChk to 0
 * set MtaSpfChk to 1
 * set MsExtChk to 0
 * delete the extra MtaSpfChk line!

Start the daemon:
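Presumably (rc script name assumed):

  /usr/local/etc/rc.d/spamilter start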

Hook the milter into sendmail:
 * cd /etc/mail
 * Add to :
 * make && make restart

Unfortunately, after doing all this, I can't seem to get it to actually do anything! No SPF checking seems to be happening; nothing is rejected. So I don't know what the deal is.

Upgrading specific ports
Certain installed ports (3rd-party software packages) require extra attention when you want to update them with portmaster. Because of this, you can't just update all of your third-party software in one swoop; it's best to do certain ones separately. Here are some notes for the more difficult ones I ran across.

Upgrade Perl and Perl modules
Instructions for major and minor version updates are separate entries in /usr/ports/UPDATING. One thing they didn't make at all clear is that (prior to 2013-06-12), perl-after-upgrade is supposed to be run after updating modules; it won't find anything to do otherwise. So, to go from 5.12 to 5.16, I did this:
 * 1) Inspect the old version's folders under   and  . Anything left behind, aside from empty folders, probably means some modules need to be manually reinstalled.

When there's a perl patchlevel update (e.g. 5.16.2 to 5.16.3), UPDATING might tell you to upgrade everything Perl-related via. I'm not a big fan of this. Somehow, pretty-much everything on the system is tied to Perl, including Apache, MediaWiki, you name it. I don't understand why.

It is possible to upgrade just Perl itself, and the modules:
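The commands were lost; a sketch of the idea (the port name/version is an example):

  portmaster lang/perl5.16
  portmaster p5-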

perl-after-upgrade doesn't exist anymore. Starting with Perl 5.12.5 / 5.14.3 / 5.16.3, they dropped the patchlevel from the folder names in  and , and the installer handled it automatically.

Update SpamAssassin and related
You don't need to do this if you've chosen instead to update all Perl modules, above.

For the options I've chosen, this will update various Perl modules, gettext, libiconv, curl, libssh2, ca_root_nss, gnupg1.
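The command itself was lost here; presumably something like:

  portmaster mail/spamassassin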

The port is rather clumsy in that it deletes, so after the update, I have to re-import the GPG key for the "sought" ruleset.



I asked about this on the mailing list, and cc'd the port maintainer, but no word yet.

If everything has installed correctly, restart sa-spamd when it's done. It probably stopped running during the install.

Update MySQL
The actual databases shouldn't be affected by a minor version bump of MySQL. But of course, you should still consider making a fresh backup first:

This updates,  , and  :

The server will be stopped automatically during the software update. Restart it, then update the tables:
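The commands were lost; presumably the port's rc script plus mysql_upgrade:

  /usr/local/etc/rc.d/mysql-server start
  mysql_upgrade -u root -p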

You should make sure MediaWiki and any other MySQL-dependent apps still work after doing this.

MySQL backup script
This simple script I wrote keeps a week's worth of daily backups of the database. I run it every day via.

MYSQLUSER and MYSQLPASSWD must be set to real values, not XXXXX and YYYYY; and DUMPDIR and ARCHIVEDIR must point to writable directories.

If there's a more secure way of handling this, let me know!


#!/bin/sh

DUMPDIR=/usr/backup/mysql/daily
ARCHIVEDIR=/usr/backup/mysql/archive
MYSQLUSER=XXXXX
MYSQLPASSWD="YYYYY"
ARCHIVEDAY=7
# Monday=1, Sunday=7

DATE=`/bin/date "+%Y%m%d"`
BZIP=/usr/bin/bzip2
DUMPER=/usr/local/bin/mysqldump
DAYOFWEEK=`/bin/date "+%u"`

set clobber

if [ -d ${DUMPDIR} -a -w ${DUMPDIR} -a -x ${DUMPER} -a -x ${BZIP} ] ; then
  OUTFILE=${DUMPDIR}/mysql-backup-${DATE}.sql.bz2
  echo "Backing up MySQL databases to ${OUTFILE}..."
  # -E added 2013-04-17 to get rid of warning about events table not being dumped
  ${DUMPER} -E -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases | ${BZIP} -c -q > ${OUTFILE}
else
  echo "There was a problem with ${DUMPDIR} or ${DUMPER} or ${BZIP}; check existence and permissions."
  exit 1
fi

if [ -d ${ARCHIVEDIR} -a ${DAYOFWEEK} -eq ${ARCHIVEDAY} ] ; then
  echo "It's archive day. Archiving ${OUTFILE}..."
  /bin/cp -p ${OUTFILE} ${ARCHIVEDIR}
  echo "Deleting daily backups older than 1 week..."
  /usr/bin/find ${DUMPDIR} -mtime +7 -exec rm -v {} \;
fi

One downside of this script is that even on my small database, it takes a little while to run, like 15 minutes or so. While it's running, the database tables are locked (read-only). If you try to use MediaWiki, the browser will just hang until the dump is finished. So I temporarily take the wiki offline by making the wiki domain's root directory be a symlink to a directory that contains a "wiki temporarily offline" index.html and a .htaccess with the following:

RewriteEngine On
RewriteCond %{REQUEST_URI} ^/.+
RewriteRule / http://offset.skew.org/ [R]

Update Apache
Updates automake, apr, apache22
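The command itself was lost here; presumably something like:

  portmaster www/apache22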

You will need to restart Apache afterward.

Update MediaWiki
General info: MediaWiki Manual: Upgrading

This is updating the Mediawiki code (PHP, etc.), not the database.

You probably want to make a backup first. I already have daily MySQL backups, so I just do this: The new installation actually shouldn't clobber your old LocalSettings or anything else; the backup is just in case. However, any extensions probably need to be reinstalled because they're often tied to a specific version of MediaWiki.

This updates php (+related), imagemagick (+related), freetype (+related)
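The command itself was lost here; presumably something like (port name is a guess based on the version discussed above):

  portmaster www/mediawiki119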

Assuming the above went well:
 * make sure there's nothing special in /usr/local/www/mediawiki/UPGRADE

Manually install appropriate versions of all of the extensions (for me, that's Gadgets, Nuke, ParserFunctions, Renameuser, Vector, ConfirmEdit, CheckUser, Cite, WikiEditor). Assuming there are no changes required in LocalSettings.php, this just involves unzipping them into the Extensions directory. The site where you get the extensions has installation instructions.

Blank pages after upgrading PCRE
In February 2014, after upgrading PCRE to 8.34 or higher, I found that MediaWiki versions prior to 1.22.1 serve up articles with empty content. This is due to a change in PCRE 8.34 that necessitates a patch to MediaWiki and a cache purge.

Symptoms:
 * empty content when viewing pages, but edit boxes have the content
 * HTTP error log shows these messages:

For reference:
 * Here's the Mediawiki bug report
 * Here's the patch (sorta) - I had to just copy-paste the  and   lines into   around line 706 (exact spot varies), replacing the old   line.

The fix takes effect immediately, but it doesn't affect cached pages, which will probably be any pages that were visited by anyone during the time the problem was happening. If you know what all these pages are, you can purge their cached copies one by one if you visit each one while logged in and load the page with ?action=purge appended to the URL. Obviously, this is not convenient if most of your wiki is affected.

Instead, I did a mass purge by using the PurgeCache extension to do it. This required creating the  folder and installing 4 files into it. Then I had to go to my user rights page at  and add myself to the developer group (which is deprecated, incidentally; another alternative would be to change the extension's code to require sysop group instead). Finally, I visited  and clicked the button to finish the cache purge.

Via web interface
Updating tt-rss can be done from within the web interface, when logged in as Admin. Of course this will mean the port is out of date, but I wanted to try it to see if it works. It does, but in the future I think I'll just use the port to update it.

First, make a backup: Now give tt-rss write permission: It will make its own backup. The update will be a fresh installation in the tt-rss directory. When the update is done, copy your themes and any other customized files over from the backup. I'd undo the permission change as well: This might be a good time to check to see if your themes also need to be updated.

Follow the instructions below to merge config.php changes and update the database.

Via ports
You can use portmaster on it like normal. However, it will probably cause PHP and some of its modules to update, and it will overwrite the old tt-rss installation. It does leave your config.php alone, but it's up to you to merge in any changes from config.php-dist.

To do an interactive merge:
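The command was lost; sdiff's -o flag does an interactive merge into a new file, e.g. (the output filename is an example):

  cd /usr/local/www/tt-rss
  sdiff -o config.php.merged config.php config.php-dist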

Now edit config.php, and set SINGLE_USER_MODE to true. Visit the site and see if you're prompted to do a database upgrade. If so, click through.

If everything is working, restart the feed update daemon:

Edit config.php to set SINGLE_USER_MODE back to false, and test again.

Upgrade to pkgng
In November 2013, I decided to upgrade from the stock pkg_install tools to the new pkgng, aka pkg. I followed the instructions in the announcement and all went well, except I had to write to the author of that announcement to learn that he meant to write  instead of. If you include the quotes, the  command will warn about the value not being a boolean.

pkgng replaces the pkg_install tools, including,  , and. It doesn't remove them from your system; you just have to remember not to use them. Putting  in your   tells portmaster and other tools to use the new tool, pkg, which has a number of subcommands, e.g..

Incompatibility with portmaster
I was hoping to also use packages when I upgrade my ports, but as of mid-December 2013, running  with the   or   option results in a warning:.

HTTPS support
Apache comes with HTTPS support (SSL) disabled by default. It's not too hard to enable, but configuration does require some effort, especially for a public server with name-based virtual hosts (i.e., serving different websites with different configurations as directed by the HTTP "Host:" header in incoming requests).

First you need an SSL certificate (cert). For a public server you don't want to use a self-signed cert; nobody will install it into their browser/OS's certificate store, and even if they do, their browser may still warn about how crappy the security is—the cipher may be strong, but no one can vouch for the cert's authenticity and trust. It's hard to explain, but it's kind of like how in journalism, a news outlet is unreliable if they don't publish corrections. A self-signed cert can't be revoked, for example if the server's private key is disclosed, but a "real" cert signed by a Certificate Authority (CA) can be.

Various services offer free SSL certificates for S/MIME (e-mail encryption), but these are specialized certificates that can't be used for web servers. There is (as of 2013) only one widely trusted CA which gives out free SSL certificates for server authentication: StartCom. Their "StartSSL™ Free" "Class 1" certificate only requires that they verify your domain (by sending a validation code to postmaster@yourdomain.org, and you copy-paste the code into their web form). This free cert is valid for one year, and is only for one fully qualified domain name and the domain name itself. If you need the cert to be for more hosts, e.g. a wildcard cert, then you have to pay.

To get a certificate, generally speaking, you have to first generate a private key (basically a random number + optional passphrase), then use the private key to generate a Certificate Signing Request (CSR), then submit the CSR to the CA (StartCom, in this case). StartCom has a web form for doing all of that on their end, but they also give you the option of skipping it and copy-pasting your own CSR. I prefer generating my own CSR because it's theoretically safer: no one has the private key but me.

Generate a private key
Some considerations:
 * Use a passphrase? No. This would make it more secure, but then you'd have to enter it every time Apache is started or sent a SIGHUP.
 * How many bits? Some tutorials say 1024, but 2048 is pretty standard now, and is the minimum accepted by StartCom. More bits means more CPU cycles needed for encryption, so I'm hesitant to use 4096 (my server is running on old hardware), lest it slow things down too much. However, I've read that encryption overhead really isn't that high, even on busy servers, so maybe it's no big deal to use 4096.
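The openssl command itself was lost; given the considerations above (2048 bits, no passphrase), it was presumably something like this (the output filename is an example):

  openssl genrsa -out server.key 2048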

Generate a CSR
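The command was lost here; presumably the usual CSR generation from the key made above (filenames are examples):

  openssl req -new -key server.key -out server.csr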
You'll be prompted to enter country, state/province, locality, organization name, organizational unit name—these can be blank or filled in as you wish (although I found that I had to enter country/state/locality). Then you enter the Common Name, which for a StartCom cert should be your domain only (yourdomain.org), not a FQDN (somehost.yourdomain.org). StartCom will make us change it, but they'll still make use of what we enter here. For a "real" wildcard cert (e.g. from SSL2BUY), you need to enter the wildcard (*.yourdomain.org).

You'll also be prompted to enter an email address that will be in the cert; I suggest something that works but isn't too revealing, like root@yourdomain.org.

If prompted for a challenge password, this is a password that you create and give to the SSL issuer (StartCom or whoever). They can then use it in order to verify you in future interactions with them. It's a way to protect against someone impersonating you when they talk to the issuer.

Optional company name is probably for if your company is requesting the cert on behalf of someone else; you could enter your company name here; I just leave it blank.

Now you have a text file, the contents of which you'll copy-paste into StartCom's form.

Get a free cert from StartCom
Turn off any ad or script blockers when accessing startcom.com. If you're new there, you'll have to verify an email address (doesn't matter what it is, as long as you can get the code they send you) and paste a validation code into a form. Then they'll try to make your browser accept an SSL cert for authenticating you. Think of it as an extra-special cookie.

Once you're in, use their Validations Wizard to validate your domain. This requires them sending a validation code to an email address at the domain in question (e.g. postmaster@yourdomain.com), and then you copy-paste the code into a form. After the domain is validated, use the Certificates Wizard, pasting in your CSR text.

StartCom's Class 1 cert must be for a fully qualified domain name (FQDN, like somehost.yourdomain.org). If you generate the CSR for the bare domain (yourdomain.org) and paste it in, the wizard makes you pick a FQDN. The resulting cert has one Common Name (CN):, and one Subject Alternative Name (meaning, "also good for this domain"):. (I don't know what happens if you generate the CSR for . Maybe it's the same?)

Then they give you your requested cert, along with download links for sub.class1.server.ca.pem (the intermediate cert) and ca.pem (the StartCom root cert). You will need to tell Apache where all three of these files are. See StartCom's Apache installation instructions to get a sense of what you'll be doing.

Configure Apache HTTPD
Put ssl.crt (your cert), server.key, sub.class1.server.ca.pem, and ca.pem in /usr/local/etc/apache22 (or wherever).

Edit httpd.conf and just uncomment this line:

Include etc/apache22/extra/httpd-ssl.conf

Edit extra/httpd-ssl.conf and comment out the <VirtualHost _default_:443> ... </VirtualHost> section and its contents (aside from what's already commented-out). Here's the general idea of what you should add instead:

Enable name-based virtual host configs:

NameVirtualHost *:443

Set up an alias for a desired access-log format. I want to use the standard "combined" format, with a couple of SSL-specific details appended:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x" combined_plus_ssl

For each of the domains named in the certificate, you need a virtual host entry. You are mainly duplicating your httpd-vhosts.conf entries, but for port 443, with SSL stuff added, and (probably) different log file locations and formats.

In HTTPS, the client first establishes an unencrypted connection to port 443 at the server's IP address. This is just in order to negotiate encryption. Once this is done, the actual HTTP request is decrypted and handled.

When using a non-SNI-capable browser, the initial, unencrypted connection does not have a hostname/domain (identifying the desired website) associated with it, so the first VirtualHost entry that matches the IP address and port 443 will be handling it, and the certificate defined in that entry must be the same as the one in the entry that will be handling the actual HTTP request. The HTTP-handling entry could be the same entry as the initial connection-handling entry, or it could be separate.

When the connection comes from an SNI-capable browser, then it will probably have a hostname/domain, so an SNI-capable server (like Apache 2.2.12 and up, built with OpenSSL 0.9.8j and up, which is standard since mid-2009) will simply use the VirtualHost entry with the corresponding ServerName for both the initial connection and the actual HTTP request.

Once the encrypted connection is established, the rest of the communication is ordinary HTTP requests that arrive encrypted. These are sent to port 443 at the same IP address, and are decrypted and handled like normal. They should contain a Host: header to specify the hostname/domain. So the first VirtualHost entry can do double-duty, handling the HTTP service for one of these domains:

# This one will be for any encrypted requests on *:443 with
# "Host: your.org:443" headers.
# By virtue of being first, this entry also applies to the initial connection on
# *:443 (for non-SNI clients), and encrypted requests on *:443 with a missing or
# unrecognized Host header.
<VirtualHost *:443>
    ServerName yourdomain.org:443
    SSLEngine on
    SSLProtocol all -SSLv2
    #SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
    SSLCertificateFile "/usr/local/etc/apache22/ssl.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache22/server.key"
    SSLCertificateChainFile "/usr/local/etc/apache22/sub.class1.server.ca.pem"
    SSLCACertificateFile "/usr/local/etc/apache22/ca.pem"
    DocumentRoot "whatever"
    CustomLog "whatever" combined_plus_ssl
    ErrorLog "whatever"
    LogLevel notice
</VirtualHost>

Now, actually, since writing this, I've switched over from the free StartCom cert to a non-free wildcard cert from another issuer. I also moved some files around, so in my Apache config, I'm actually pointing to different locations for the certificate and key files. Notably, there's no SSLCertificateChainFile needed for the way this particular issuer has people set it up; instead, the root and intermediate cert from this issuer are together in a single file referenced by SSLCACertificateFile.

You're going to want LogLevel to be notice or higher, because there's a lot of noise in the SSL info-level messages.

Of course the * can be replaced with a specific IP address, if you want.

The rest of the entries are only for the specific Host: headers, so make sure there's one for the Common Name:

# This one will be for any encrypted requests on *:443 with
# "Host: somehost.yourdomain.org:443" headers, and for the initial
# connection on *:443 by SNI-capable clients wanting somehost.yourdomain.org.
# Don't forget to mirror any non-SSL, non-log changes here
# with the corresponding *:80 entry in httpd-vhosts.conf.
<VirtualHost *:443>
    ServerName somehost.yourdomain.org:443
    SSLEngine on
    SSLProtocol all -SSLv2
    #SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
    SSLCertificateFile "/usr/local/etc/apache22/ssl.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache22/server.key"
    SSLCertificateChainFile "/usr/local/etc/apache22/sub.class1.server.ca.pem"
    SSLCACertificateFile "/usr/local/etc/apache22/ca.pem"
    DocumentRoot "whatever"
    CustomLog "whatever" combined_plus_ssl
    ErrorLog "whatever"
    LogLevel notice
</VirtualHost>

Ref (non-SNI): https://wiki.apache.org/httpd/NameBasedSSLVHosts
Ref (SNI): https://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI

It's a good idea to have entries for any other domains hosted on the same IPs. That is, every HTTP website should have some kind of HTTPS service as well. This has a couple of ramifications:
 * You will have to keep the *:443 entries in sync with the *:80 ones.
 * When people try to access the HTTPS versions of sites that the certificate isn't valid for, they'll get warnings in their browsers. If they choose to accept the certificate anyway, what do you want to do? In my opinion, the best thing to do is redirect to an HTTPS site that the certificate is good for, or if there's no such option, just redirect to the regular HTTP site:

# We don't want people to accept the certificate if they're actually
# trying to access other hosts. If they accept the cert anyway, we
# redirect them to the appropriate, probably non-SSL locations.
# Hopefully this won't interfere with HTTPS Everywhere.
<VirtualHost *:443>
    ServerName www.yourdomain.org:443
    SSLEngine on
    SSLProtocol all -SSLv2
    #SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
    SSLCertificateFile "/usr/local/etc/apache22/ssl.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache22/server.key"
    SSLCertificateChainFile "/usr/local/etc/apache22/sub.class1.server.ca.pem"
    SSLCACertificateFile "/usr/local/etc/apache22/ca.pem"
    DocumentRoot "whatever"
    Redirect / https://yourdomain.org/
    CustomLog "whatever" combined_plus_ssl
    ErrorLog "whatever"
    LogLevel notice
</VirtualHost>

<VirtualHost *:443>
    ServerName non-ssl-host.yourdomain.org:443
    SSLEngine on
    SSLProtocol all -SSLv2
    #SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
    SSLCertificateFile "/usr/local/etc/apache22/ssl.crt"
    SSLCertificateKeyFile "/usr/local/etc/apache22/server.key"
    SSLCertificateChainFile "/usr/local/etc/apache22/sub.class1.server.ca.pem"
    SSLCACertificateFile "/usr/local/etc/apache22/ca.pem"
    DocumentRoot "whatever"
    Redirect / http://non-ssl-host.yourdomain.org/
    CustomLog "whatever" combined_plus_ssl
    ErrorLog "whatever"
    LogLevel notice
</VirtualHost>

Regarding the commented-out line, I just put the SSLCipherSuite directive in one place, outside of the VirtualHost configs, so it applies to all of them.

See if it works

 * Visit your web sites with https URLs and see what happens.
 * Use a third-party SSL checker like SSLShopper's SSL Checker.
 * If you use Firefox or Chrome, install the HTTPS Everywhere extension, create a custom ruleset for it, then see if you get redirected to the https URL when you try to visit the http URL of your web site.

Something else to check for is mixed content. Ideally, an HTTPS-served page shouldn't reference any HTTP-served scripts, stylesheets, images, videos, etc.; browsers may warn about it. Replace any http:// links in your HTML with relative links (for resources on the same site) or https:// links (for resources that are verifiably available via HTTPS). For example, in MediaWiki's LocalSettings.php, I had to change $wgRightsUrl and $wgRightsIcon to use https:// URLs. There may still be some external resources which are only available via HTTP, but if they're outside your control, there's nothing you can do about that.

HSTS
HSTS is a lot like HTTPS Everywhere, but it comes standard in modern browsers. You enable HSTS on the server just by having it send a special header in its HTTPS responses. The header tells HSTS-capable browsers to only use HTTPS when accessing the site in the future. In the main configuration, you need mod_headers loaded. On my system, this was already enabled. Then, in the VirtualHost section for each HTTPS site (not regular HTTP!), you need the Strict-Transport-Security header.
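A minimal sketch of those two pieces (the module path and max-age value are examples, not necessarily what I used). In the main config:

  LoadModule headers_module libexec/apache22/mod_headers.so

and inside each HTTPS VirtualHost:

  Header always set Strict-Transport-Security "max-age=31536000"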

Test it in your browser by disabling HTTPS Everywhere (if installed), then visit the HTTPS website, then try to visit the HTTP version of the site. The browser should change the URL back to use HTTPS automatically.

CRIME attack mitigation
This is an easy one. Just ensure TLS compression is not enabled. It normally isn't enabled, but just in case:
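The directive meant here is presumably mod_ssl's SSLCompression (available in Apache 2.2.24+ / 2.4.3+):

  SSLCompression off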

BEAST attack mitigation

 * requires combo of  and
 * use TLS 1.1 or higher, or (for TLS 1.0) only use RC4 cipher
 * you can't specify "RC4 for TLS 1.0, but no RC4 for TLS 1.1+" in mod_ssl
 * TLS 1.1+ can still be downgraded to 1.0 by a MITM!
 * RC4 has vulnerabilities, too!
 * Apache 2.2 w/mod_ssl is normally built w/OpenSSL 0.9.x, supporting TLS 1.0 only!

But wait, read on...

Perfect forward secrecy
Cipher suites using Diffie-Hellman key exchange ("DH") can provide forward secrecy. "Perfect" forward secrecy (PFS) is an enhanced version of this policy.
 * it ensures session keys can't be cracked if private key is compromised
 * it requires ephemeral Diffie-Hellman key exchange ("EDH" or "DHE"), optionally with Elliptic Curve cryptography ("ECDHE" or "EECDH") to reduce overhead
 * ECDHE requires Apache 2.3.3+! (it's OK to leave it listed in 2.2's config though)
 * browser support varies

The basic config of gives me a pretty nice report with lots of green "Forward Secrecy" results on the Qualsys SSL Labs analyzer.

This gets more complicated if you want to mitigate the BEAST attack. There are suggestions for dealing with it through the use of SSLCipherSuite directives that prioritize RC4 if AES isn't available. However, this is not good for Apache 2.2, because you'll probably end up disabling forward secrecy for everyone.

Reference for SSLCipherSuite: here (click). It may help to know that on the command line, you can run openssl ciphers -v followed by the same parameters you give in the SSLCipherSuite directive, and it will tell you what ciphers match.

SMTP Authentication and STARTTLS support in Sendmail
FreeBSD comes with sendmail installed in the base system, but without support for STARTTLS (the SMTP command that sets up encryption). This means you get no encryption support at all until you rebuild sendmail with support for the SASL libraries. (sendmail also exists in the ports collection, but that's mainly just for helping older systems upgrade their sendmail installations.)

Set up authentication
In order to set up encryption, you first have to rebuild sendmail with support for the SASL libraries. It so happens that enabling SMTP Authentication also requires the SASL libraries, so I found it was pretty easy to just follow the instructions in the SMTP Authentication section of the FreeBSD Handbook.

Where the handbook refers to, I made sure to use   instead.

The handbook also suggests increasing the log level from its default of 9, but doesn't say how. You do it by adding this to the .mc file:

dnl log level
define(`confLOG_LEVEL', `13')dnl

At this point, do the  as directed, just to make sure nothing broke. sendmail should start up quietly. Maybe send yourself a test message and make sure you can still receive mail OK. Feel free to tail the mail log and see what it says.

The outcome here, if I understand correctly, is this:
 * SMTP clients (email programs) can now ask to interact with my server as a local user (with their login password), in order to use my server as a relay for their outbound mail. (Your ISP may not appreciate this; I know mine insists that people use the ISP's own relays exclusively.)

Previously, to allow relaying, I had set up each user's home IP address as a valid  in. Obviously authentication is better. However...

I think the handbook's advice, as given, is rather dangerous, because it says to override the default authentication methods, which the documentation currently says are GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5. The handbook's advice omits KERBEROS_V4, which is no big deal, but then it also adds the LOGIN authentication method, which transmits the username and password in the clear (well, base64-encoded), which is a big deal if the connection isn't yet encrypted.

Regardless of whether you leave LOGIN (or PLAIN) in there, but especially if you do, I strongly suggest you also add this to the .mc file:

dnl SASL options:
dnl f = require forward secrecy
dnl p = require TLS before LOGIN or PLAIN auth permitted
dnl y = forbid anonymous auth mechanisms
define(`confAUTH_OPTIONS',`f,p,y')dnl

While you're in there, throw KERBEROS_V4 back in and change the comments to be more informative:

dnl authentication will be allowed via these mechanisms:
define(`confAUTH_MECHANISMS', `GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl

dnl relaying will be allowed for users who authenticated via these mechanisms:
TRUST_AUTH_MECH(`GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl

Set up encryption
Now that sendmail is SASL-ified, you can set up public-key encryption, i.e. support for the STARTTLS command. The command won't work until you tell sendmail where the private key and certificates are. So, in the .mc file add the following:

dnl certificate and private key paths for STARTTLS support
define(`confCACERT_PATH', `/etc/mail/certs')dnl
define(`confCACERT', `/etc/mail/certs/CAcert.pem')dnl
define(`confSERVER_CERT', `/etc/mail/certs/MYcert.pem')dnl
define(`confSERVER_KEY', `/etc/mail/certs/MYkey.pem')dnl
define(`confCLIENT_CERT', `/etc/mail/certs/MYcert.pem')dnl
define(`confCLIENT_KEY', `/etc/mail/certs/MYkey.pem')dnl

Also create the referenced directory and files. They must be readable only by owner, and symlinks are OK:
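A minimal sketch of that setup, assuming the key and certs are regular files directly under /etc/mail/certs (adjust accordingly if you use symlinks):

mkdir -p /etc/mail/certs
chmod 700 /etc/mail/certs
# after copying (or symlinking) CAcert.pem, MYcert.pem, and MYkey.pem into place:
chmod 600 /etc/mail/certs/*.pem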

Now  and tail the mail log, watching for errors. Also run the tests at checktls.com ... for me, everything worked on the first try!
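Besides checktls.com, you can also exercise STARTTLS with OpenSSL's built-in client (the hostname here is a placeholder for your own server):

openssl s_client -connect mail.example.com:25 -starttls smtp

If the handshake succeeds, you'll see the server's certificate chain and cipher details.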

Outcomes:
 * SMTP clients (email programs and mail relays) that connect to my server anonymously in order to hand off mail for my users (or for other domains I relay to) can now request encryption and communicate securely.
 * My SMTP server, when connecting to a remote SMTP server in order to deliver mail from my users, can now request encryption and communicate securely.

Certificate limitations
I have read that not all certificates work for STARTTLS.

Apparently you can run  to see what "purposes" your cert is approved for. Here's the output for my AlphaSSL wildcard cert:

Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : No
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No

I suspect "SSL client : Yes" is crucial.

Here's the output for my StartCom free cert:

Certificate purposes:
SSL client : No
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : No
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
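The listings above presumably come from openssl's -purpose option; something like this, using the server cert path from the .mc config above:

openssl x509 -purpose -noout -in /etc/mail/certs/MYcert.pem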

Client certificate verification
What good is encryption if the client is being impersonated by some Man-in-the-Middle (MITM) who is choosing his favorite cipher and sending you his public key? The way to defend against this is to verify the client. But you also have to figure out what to do with unverifiable clients.

Certificates for trusted clients or their CAs are required on the server
Unless you configured the server not to request a certificate from the client, it will ask for one, and it will tell the client "I'm prepared to accept a certificate signed with these CA root certificates..." The certs it will accept are the root certs and self-signed certs that are in the  file, plus those that you have symlinks for in the   directory. The client will then decide whether it wants to offer the server a cert at all.

The Sendmail Installation and Operation Guide says you can't have the server accepting too many root certs, because the TLS handshake may fail. But it doesn't say how many is too many; it just says to include only the CA cert that signed your own certs, plus any others you trust. I take this to mean that I'm not supposed to include the whole Mozilla root cert bundle, i.e., as installed by the security/ca_root_nss port (which is maybe already on the system, as it is needed by curl, SpamAssassin, gnupg, etc.).

To verify a client cert signed by a CA, you need a copy of the CA root certificate and any intermediate certificates to be on the system. As many certs as you want can be concatenated together in the  file, or they can be in separate files represented by symlinks, named for the cert's hash, in the   directory. If intermediate certificates are present, they can be in separate files, too, or they can have the higher-level certs, on up to the root, concatenated to them in one file; e.g. GoDaddy has a  file available for this purpose, with the contents of   followed by the contents of  ; the hash will be for the first cert in the bundle (i.e., the lowest-level intermediate cert).

To verify a self-signed client cert, I believe you need a copy of the self-signed cert to be on the system; it is treated like a CA root cert. It can live in the file with the root certs or it can have a symlink in the  directory.

Here is how to generate the appropriate symlink (but replace both instances of cert.crt with the path to the appropriate file):
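I believe this is the usual OpenSSL hash-symlink trick, something along these lines:

# creates a symlink named <hash>.0 pointing at cert.crt
ln -s cert.crt `openssl x509 -noout -hash -in cert.crt`.0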

Verification results
When your server receives email via an encrypted connection, you will see something like this in the  headers:

Here are the possible client certificate verification codes:
 * means that the verification succeeded.
 * means that the server didn't ask for a cert, probably because it was configured not to.
 * means that the server asked for a cert, but the client didn't provide one, or it didn't provide the intermediate and root certs along with the client cert. Maybe the client isn't configured to send the whole bundle, or it doesn't have a client cert to provide, or maybe the client didn't like the list of acceptable CA root certs the server offered. This code is not cause for concern unless you were expecting to be able to verify that client because you have the necessary certs installed.
 * means that the server asked for a cert, and the client provided one that couldn't be verified. Maybe it's expired, or the server doesn't have the necessary root and intermediate certs, or the certs it has don't have signatures that match those presented, or one of the certs presented is listed in the CRL file (if any).
 * Other codes are  (no STARTTLS command issued),   (TLS handshake failure),   (SMTP error), and   (temporary, unspecified error).

By default, Sendmail doesn't care what the code is; it'll proceed with the transaction anyway, if possible. Depending on your needs, you can configure Sendmail to react to these codes; that's something yet to be filled in.

The biggest caveat
On a public MX host, you're required (by RFC 3207) not to refuse mail just because the connection is unencrypted, so you can't really do much verification of clients.

A client may present you with valid certs, but if you don't have the necessary certs installed to verify them, that's your fault, not the client's. And you can't treat  as a reason to refuse delivery while accepting all the other non-  codes; what's to stop the client from just trying again and deliberately triggering one of those other codes, e.g. by skipping STARTTLS entirely, or by not sending a cert?

So really there are only a few choices (pick one):
 * Don't attempt verification at all.
 * Attempt verification of a handful of trusted hosts & root CAs, but only for informational purposes.
 * Require encrypted connections, attempt verification of a handful of trusted hosts & root CAs, and disallow relaying for those that don't get . This is not an option for public servers.

Sendmail encryption related documentation of note
Official Sendmail docs:
 * /usr/share/sendmail/cf/README - massive doc explaining .mc & .cf files and all the options therein. Current copy online at MIT.
 * /usr/share/sendmail/cf/cf/knecht.mc - Eric Allman's .mc file with many interesting things in it
 * /usr/src/contrib/sendmail/doc/op/op.me (this is where it ends up on FreeBSD) - troff source for the Sendmail Installation and Operation Guide. There's a Makefile in that folder, so you can  to generate PostScript, ASCII (ugly), and PDF copies. A recent but not-quite-current PDF copy is at sendmail.com. No one else seems to have it online, and very few sites refer to it, yet it's indispensable!

FreeBSD-specific:
 * /etc/mail/README - Mainly just explains how to work around an issue with getting it to work with jails.
 * SMTP Authentication - outdated chapter of the FreeBSD Handbook. The instructions for rebuilding Sendmail are good for enabling STARTTLS and AUTH, at least, but these docs need work.

Useful guides:
 * Secured Sendmail with SMTP Authentication Guillaume "yom" Bibaut's HOWTO
 * My Experiences (So Far) with STARTTLS and Sendmail Weldon Whipple's trials and tribulations; covers certificate and other stuff in more depth than most, but also somewhat outdated (c. 2002)

Cyrus SASL-related:
 * http://www.postfix.org/SASL_README.html#server_cyrus - 95% good info about SASL, 5% Postfix-specific stuff you can ignore
 * Configuring SASL - Wayne Pollock's mostly excellent overview, only the "Available Mechanisms: PLAIN" section is outdated; the saslauthd man page explains what's really available.
 * Cyrus SASL for System Administrators - Claus Aßmann's current docs

TLS/SSL and certificates:
 * SMTP STARTTLS in sendmail/Secure Switch - Claus Aßmann's current docs, not all that helpful.
 * IBM's WebSphere MQ documentation has a great general explanation of certificates. Ignore the MQ-specific stuff.
 * OpenSSL Command-Line HOWTO - Paul Heinlein's invaluable doc

Enable a caching DNS server
FreeBSD comes with BIND preconfigured to be a caching DNS server listening on 127.0.0.1, but it is disabled by default. If you enable it, you'll reduce traffic to/from your ISP's DNS server.


 * Add  to
 * Uncomment the  section of   and put your ISP's nameserver addresses in it.
 * In , replace your ISP's nameserver addresses with 127.0.0.1 (a sketch of all three changes follows this list).
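I believe the three changes amount to something like the following (the file names and forwarders syntax are from the stock FreeBSD setup; x.x.x.x and y.y.y.y are placeholders for your ISP's nameservers):

# /etc/rc.conf
named_enable="YES"

# /etc/namedb/named.conf -- inside the options {} block, uncomment and fill in:
forwarders {
        x.x.x.x; y.y.y.y;
};

# /etc/resolv.conf
nameserver 127.0.0.1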

Test it:



The first line of output should say  and the lookup should succeed.
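The exact command isn't shown above, but a quick check can be done with nslookup (in the base system), which reports the server it used at the top of its output:

nslookup www.freebsd.org
# the Server line should show 127.0.0.1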

At this point you are just forwarding; anytime you look up a host not yet in the cache, you are asking your ISP's nameserver to request it for you. It might pull it from its own cache.

Support RBLs
You are probably combatting spam by using RBLs, which rely on DNS queries to find out if a given IP is a suspected spammer.

Some RBL services block queries from the major ISPs, because they generate too much traffic. URIBL is an example of such a service.

To deal with this, after enabling the caching & forwarding DNS service as described above, you now need to disable forwarding for just the RBL domains. Then your server will query those domains' DNS servers directly. I believe it will work if you just add something like this to :

/* Let RBLs see queries from me, rather than my ISP, by disabling forwarding for them: */

// RBLs that are disabled but mentioned in my sendmail config
zone "blackholes.mail-abuse.org" { type forward; forward first; forwarders {}; };

// RBLs that are enabled in my sendmail config
zone "bl.score.senderscore.com" { type forward; forward first; forwarders {}; };
zone "zen.spamhaus.org" { type forward; forward first; forwarders {}; };

// RBLs that are probably enabled in SpamAssassin
zone "multi.uribl.com" { type forward; forward first; forwarders {}; };
zone "dnsbl.sorbs.net" { type forward; forward first; forwarders {}; };
zone "combined.njabl.org" { type forward; forward first; forwarders {}; };
zone "activationcode.r.mail-abuse.com" { type forward; forward first; forwarders {}; };
zone "nonconfirm.mail-abuse.com" { type forward; forward first; forwarders {}; };
zone "iadb.isipp.com" { type forward; forward first; forwarders {}; };
zone "bl.spamcop.net" { type forward; forward first; forwarders {}; };
zone "fulldom.rfc-ignorant.org" { type forward; forward first; forwarders {}; };
zone "list.dnswl.org" { type forward; forward first; forwarders {}; };

NTP
For things to run smoothly, especially email, you need to keep your system's clock (the one that keeps track of the actual date/time) in sync with the outside world.

stock/classic/reference/Unix ntpd
When setting up FreeBSD via sysinstall, you're asked to pick a server for ntpdate to use. This sets  and   in /etc/rc.conf, which causes /etc/rc.d/ntpdate to run at boot time to set the clock once, with immediate results. You're expected to make it run daily, if not more often, via a script or cron job.

But wait, ntpdate is deprecated! See its man page. You're now supposed to run ntpd, which adjusts the time gradually, and can connect to remote NTP servers as often as it needs to.

Ideally, you have it running as a daemon, enabled via  in /etc/rc.conf. You could also or instead do a clock sync on-demand via, same as running. Either way, it uses /etc/ntp.conf for its configuration.

Rudimentary instructions for getting ntpd as a daemon are in the FreeBSD Handbook, but they don't cover security issues very well. In particular, you need this in your /etc/ntp.conf:

# 2013-2014: close off hole that lets people use the server to DDoS:
# 1. disable monitoring
# 2. before 'server' lines, use the following, as per
#    https://www.team-cymru.org/ReadingRoom/Templates/secure-ntp-template.html
disable monitor
# by default act only as a basic NTP client
restrict -4 default nomodify nopeer noquery notrap
restrict -6 default nomodify nopeer noquery notrap
# allow NTP messages from the loopback address, useful for debugging
restrict 127.0.0.1
restrict ::1

You need this because this particular ntpd implementation listens on UDP port 123 all the time, exposing it to the outside world. It needs to keep that port open in order to work at all. You should try to reduce this exposure via "restrict" lines in ntp.conf; these can be used to say that only traffic purporting to be from certain hosts (the servers you want to get time info from) will be acknowledged. It wouldn't hurt to duplicate this info in your firewall rules. But! Read on...

Since I had bad luck with nearby NTP servers going offline over the years, I much prefer to use the pool.ntp.org hostnames as the servers to sync to. These pools, by nature, are always changing their IP addresses. Thus you can't use "restrict" lines or firewall rules to whitelist these IPs, because you don't know what they are!

OpenNTPD
After searching in vain for a way to use the pools securely, I gave up and decided to run openntpd from ports. This is much, much simpler.


 * In /etc/rc.conf:

ntpd_enable="NO"
openntpd_enable="YES"
openntpd_flags="-s"


 * You can use /usr/local/etc/ntpd.conf as-is; it just says to use a random selection from pool.ntp.org, and to not listen on port 123 (it'll use random, temporary high-numbered ports instead).


 * Logging is the same as for the stock ntpd; just put this in /etc/syslog.conf:

ntp.*                                  /var/log/ntpd.log




 * Log rotation is probably desirable. Put this in /etc/newsyslog.conf:

/var/log/ntpd.log          644  3     *    @T00    JCN



You can tail the log to see what it's doing. You should see messages about valid and invalid peers, something like this:

ntp engine ready
set local clock to Mon Feb 17 11:44:06 MST 2014 (offset 0.002539s)
peer x.x.x.x now valid
adjusting local clock by -0.046633s

Spamassassin config
See above re:
 * updating Spamassassin, which sometimes involves fixing things that break
 * setting up sa-utils for daily ruleset maintenance, and using the "sought" ruleset
 * enabling a caching, non-forwarding DNS server so RBL checks work

Here are some notes about the rest of my Spamassassin config.

v320.pre
There are a bunch of plugins that come with Spamassassin. Many are enabled by default via loadplugin lines in the various *.pre files. I enabled a couple more by uncommenting some more loadplugin lines in /usr/local/etc/mail/spamassassin/v320.pre.

This one is what allows the shortcircuit rules to work:

loadplugin Mail::SpamAssassin::Plugin::Shortcircuit

...You also have to create shortcircuit.cf; see below.

This one is an optimization to compile rules to native code:

loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody

shortcircuit.cf
Some basic rules for the Shortcircuit plugin come with SpamAssassin. These rules can be extended by using the sample Shortcircuiting Ruleset in the SA wiki.

spamc.conf
I feel it's a good idea to avoid scanning extremely large messages. Yes, this gives spammers a back door, but scanning incoming email shouldn't be something that cripples the server. If I had a faster box with more RAM, I would set this limit much higher.

-s 600000
 * 1) max message size for scanning = 600k

local.cf
I want suspected spam to be delivered to users as regular messages, not as attachments to a Spamassassin report:

report_safe 0

If a message matches the whitelists, just deliver it without doing a full scan:

shortcircuit USER_IN_WHITELIST       on
shortcircuit USER_IN_DEF_WHITELIST   on
shortcircuit USER_IN_ALL_SPAM_TO     on
shortcircuit SUBJECT_IN_WHITELIST    on

Likewise, if a message matches the blacklists, just call it spam:

shortcircuit USER_IN_BLACKLIST       on
shortcircuit USER_IN_BLACKLIST_TO    on
shortcircuit SUBJECT_IN_BLACKLIST    on

I've never seen BAYES_00 or BAYES_99 mail that was misclassified, so avoid a full scan on that as well:

shortcircuit BAYES_99                spam
shortcircuit BAYES_00                ham

My users get to have their own ~/.spamassassin/user_prefs files:

allow_user_rules 1

My users probably aren't sending out spam to other users on my system:

score NO_RELAYS 0 -5 0 -5
 * 1) probably not spam if it originates here (default score 0)

Custom rule: among my users (mainly me), I believe a message with a List-Id header is slightly less likely to be spam:

header FROM_MAILING_LIST       exists:List-Id
score  FROM_MAILING_LIST       -0.1

Custom rule: a message purporting to be from a mailing list run by my former employer is much less likely to be spam:

header FOURTHOUGHT_LIST        List-Id =~ /<[^.]+\.[^.]+\.fourthought\.com>/
score  FOURTHOUGHT_LIST        -5.0

Custom rule: a message from an IP resolving to anything.ebay.com can be whitelisted:

whitelist_from_rcvd *.ebay.com ebay.com
 * 1) maybe not ideal, but at one point I missed some legit eBay mail

I realize these custom rules could easily let spam through, but I was desperate to avoid false positives, which I was getting when using the AWL (Auto-WhiteList plugin), which despite copious training was making a lot of ham score as spam. AWL is no longer enabled in SpamAssassin by default, and I sure as hell am not using it ever again. So I probably don't need these rules anymore. I leave them in, though, because they remind me how to set up this kind of thing.

Before I enabled a caching, non-forwarding DNS server, the URIBL rules weren't working, so I had to disable the lookups by setting the URIBL scores to zero. Since I set up the non-forwarding DNS server, my URIBL queries are coming from my own IP rather than my ISP's DNS servers, so it works properly. Therefore, I've got this commented out now; it's just here for future reference:
# score URIBL_BLACK 0
# score URIBL_RED 0
# score URIBL_GREY 0
# score URIBL_BLOCKED 0

Bounces generated by my own MTA for mail that originates on my network will get scored lower (i.e., more likely to be ham) due to the NO_RELAYS rule. Without additional configuration, though, bounces generated by remote MTAs, whether for mail originating on my network or elsewhere, will not be recognized or handled any differently from other inbound mail. Remotely generated bounces for mail originating elsewhere are called backscatter; backscatter is not actually spam, although it often contains spam or viruses, and it is generally unwanted.

In order to distinguish bounces from regular mail, and to distinguish bounces for mail originating here from backscatter (although by default it doesn't actually score them differently), I need to activate the VBounce plugin. This plugin is already enabled in v320.pre, but it doesn't actually do anything until it is told what the valid relays are for local outbound mail. So here I tell it what to look for in the Received headers to know that a bounce is for mail that originated from my network:

whitelist_bounce_relays chilled.skew.org

Bounces should then hit the ANY_BOUNCE_MESSAGE rule plus one of these:
 * BOUNCE_MESSAGE = MTA bounce message
 * CHALLENGE_RESPONSE = Challenge-Response message for mail you sent
 * CRBOUNCE_MESSAGE = Challenge-Response bounce message
 * VBOUNCE_MESSAGE = Virus-scanner bounce message

You can customize your scoring for these if you want, or in your .procmailrc you can specially handle scanned mail with these tags appearing in the  header. However, I thought I shouldn't be sending obvious bounces to Spamassassin at all...hmm.

Personal user_prefs
After saving and separating my ham and spam for a couple of months, then looking at the scores, I'm pretty confident that ham addressed to me is very unlikely to score much higher than 3, so I lowered the spam threshold from 5 to 4:

require_hits 4

Similarly, I'm finding ham addressed to me is very unlikely to be in the BAYES_50_BODY to BAYES_99_BODY range, so I bump those scores up a bit (the defaults are 0.001, 1.0, 2.0, 3.0, and 3.5, respectively):

score BAYES_50_BODY 2.0
score BAYES_60_BODY 2.5
score BAYES_80_BODY 3.0
score BAYES_95_BODY 4.0
score BAYES_99_BODY 4.5

I thought the default score for a Spamcop hit was pretty low (1.3 as of January 2014), so I bumped it up:

score RCVD_IN_BL_SPAMCOP_NET 3.0

(I already have my MTA checking Spamcop, but it only looks at the IP connecting to me, so it lets through spam that originated at a Spamcop-flagged IP but that was relayed through a non-flagged intermediary.)

Remember the down-scoring I do for mailing lists in the site config? Well, if that mailing list traffic is addressed to me, I want to score it even lower:

score  FOURTHOUGHT_LIST        -100.0
score  FROM_MAILING_LIST       -1.0

I also have a bunch of  entries for my personal contacts.

Finally, I want a Spamassassin report added to the headers of every message I get, so I know why it scored as it did:

add_header all Report _REPORT_

Git
I already have git installed on a different host, so this is more just my notes on how to use it.

Initial setup
This creates ~/.gitconfig and populates it with reasonable defaults (but set user.name and user.email to real values; I made mine match what I use on GitHub, for consistency):

git config --global user.name "yourname"
git config --global user.email "youremail"
git config --global core.excludesfile ~/.gitignore
git config --global core.autocrlf input
git config --global core.safecrlf true
git config --global push.default simple
git config --global branch.autosetuprebase always
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto

Create a ~/.gitignore and tell it what file globs to ignore (so they won't be treated as part of your project):

# ignore files ending with .old, .orig, or ~
*.old
*.orig
*~

Create a place for your repos:
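(The location and name are arbitrary; for example:)

mkdir -p ~/git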

Use a separate SSH keypair for GitHub
You don't have to use your main SSH identity for GitHub.
 * Generate a new keypair:
 * When prompted for a file in which to save the key, make it create a new file:
 * Set a passphrase when prompted.
 * Copy-paste the content of  into the SSH keys section of your settings on GitHub.
 * In your, add this:

Host github.com
IdentityFile ~/.ssh/id_dsa_github

 * See if it works: you should get a message that you've successfully authenticated (see the sketch after this list).
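Filling in the command blanks above with my best guesses (the key type and filename come from the config fragment above; GitHub's standard connection test is the second line):

ssh-keygen -t dsa        # when prompted, save to ~/.ssh/id_dsa_github and set a passphrase
ssh -T git@github.com    # test; should report that you've successfully authenticated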

Customizations
Here are some of my favorite customizations.

/etc/ssh/sshd_config
These affect the behavior of the SSH server; a sketch of the corresponding config lines follows the list.
 * - Change the listening port from 22 to something else! Eliminates brute-force attacks.
 * - Enable public access to reverse tunnels.
 * - Every 30 seconds, check for client response.
 * - Don't disconnect an unresponsive client until 99999 checks fail.
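The option names below are my assumption based on the descriptions above (the original lines aren't shown); 2222 is just an example port:

Port 2222                     # anything other than 22
GatewayPorts yes              # enable public access to reverse tunnels
ClientAliveInterval 30        # every 30 seconds, check for client response
ClientAliveCountMax 99999     # don't disconnect until this many checks fail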

~/.ssh/config
These are settings to use when connecting with the ssh client to remote hosts (replace ###### as appropriate):

CheckHostIP yes
Compression yes

Host my.otherhost.com
  Port #####

Host github.com
  IdentityFile ~/.ssh/id_dsa_github

/etc/sysctl.conf
These are changes to default kernel settings in multi-user mode.
 * - Probably no longer necessary if using the sshd_config customizations above, but just in case, every 9 minutes (instead of every 2 hours), send something to every TCP client, so crappy routers between us and them don't think we've disconnected. I used this because I found that some routers had a 10-minute connection timeout, which kept killing my SSH sessions and tunnels.
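My assumption is that the knob in question is the TCP keepalive idle time, which on FreeBSD is expressed in milliseconds; something like this (not necessarily the exact line used):

# send keepalives after 9 minutes of idle time instead of the 2-hour default
net.inet.tcp.keepidle=540000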

/etc/make.conf
These are extra environment variables enabled during 'make' runs, and usually are specially checked-for by the Makefiles in the FreeBSD ports.
 * - My system is a console-based server only, no X11 libraries.
 * - No encryption restrictions, please.
 * - Don't waste time on tests when building ImageMagick.
 * - When building FreeType, enable subpixel rendering capability (disabled by default due to patent crap).

/etc/syslog.conf
Anything going to /dev/console should also go to a regular file:

console.*                  /var/log/console.log
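I believe syslogd won't create that file for you, so create it first and keep it readable only by root:

touch /var/log/console.log
chmod 600 /var/log/console.log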

If logged in, some users get important messages in their ttys:

!-sm-mta
!sm-mta
!*
*.notice                   root,mike
*.warning                  root,mike