This is a compilation of random notes relating to FreeBSD system administration, mainly for my own benefit. Any questions/comments, email me directly at [mailto:root%40skew.org?subject=your+FreeBSD+notes root (at) skew.org].

==Current notes==

These should be pretty up to date.

* My [[User:Mjb/FreeBSD on BeagleBone Black|FreeBSD on BeagleBone Black notes]] cover the installation, configuration, and upgrade of FreeBSD on BeagleBone Black hardware.
* My [[User:Mjb/FreeBSD on BeagleBone Black|FreeBSD on BeagleBone Black additional software notes]] cover the installation and configuration of the major servers and apps I use.

==Older notes==

* My [[User:Mjb/FreeBSD on VirtualBox|FreeBSD on VirtualBox notes]], about my attempt to run FreeBSD in a virtual machine on my desktop PC
* My [[User:Mjb/Unbound on FreeBSD 10|Unbound on FreeBSD 10 notes]], covering the configuration of Unbound (a DNS resolver)
* My [[User:Mjb/FreeBSD 8 installation and upgrade|FreeBSD 8 installation and upgrade notes]]
* My [[User:Mjb/FreeBSD 8 ports management|FreeBSD 8 ports management notes]], mainly about the use of <code>portmaster</code>
* My [[User:Mjb/FreeBSD 8 additional software|FreeBSD 8 additional software notes]], about installing & configuring various services
* My [[User:Mjb/FreeBSD customizations|FreeBSD customizations]]

==Welcome, FreeBSD newbies==

Linux user? The BSDs (FreeBSD, OpenBSD, NetBSD, and others) are not "Linux distributions", although from a user's perspective they act very much the same.

The various BSD and GNU/Linux operating systems began life in the early 1990s as open-source imitations of AT&T's multi-user Unix operating system, System V. These Unix-like OSes have all grown, forked, and improved substantially over the years, and they are all still very similar to one another, but there are differences you should be aware of if you are a longtime Linux user:

* On FreeBSD, at least at first, you will mainly be interacting with the console: the text-based interface with command-line prompts, rather than a graphical user interface (GUI) with a desktop, icons, and windows. On FreeBSD, GUIs are optional add-ons that you can mess with after first getting up and running ''without'' a GUI. (This may change someday, but it's a low priority in the BSD world.)
* BSD tends to come fairly bare-bones, without a huge suite of common apps preinstalled.
* When you want to add software or upgrade the system, you have a choice between installing prebuilt packages with the tool <code>pkg</code>, or (as most people do) building from source code via the "ports collection" and a tool called <code>portmaster</code>. There are pros and cons to each approach; one of the cons of building from source is that installing software takes much longer than you are probably used to. (Technically, you do have the option of using poudriere or [https://www.freshports.org/ports-mgmt/synth synth] to pre-build packages on a faster host, or of using someone's private package repository, but for simplicity and security, these notes don't yet broach those topics.)
* FreeBSD has no <code>sudo</code> command (though you can install it if you really miss it). Normally, if you want to do something as the superuser, just switch your identity to root with <code>su -m</code> and run your commands from the new shell.
* Some features of the Linux version of the POSIX shell are actually "bashisms" which are unavailable in FreeBSD's <code>/bin/sh</code>.
* The default user shell is tcsh, not bash. If you want bash, you have to install it.

==OS installation & upgrade==

===How much disk space to allocate?===

You need roughly 15 GB for the OS, ports, and updates (buildworld, etc.).

You also need space for "userland" (home directories, websites, databases) as well as mail and temporary files; this totals another 5 GB on my modest system with few users.

You also need swap space, the ideal amount of which depends on how much physical RAM you have and how much RAM your system will ever need at one time; "twice the amount of physical RAM" used to be the recommendation, and 4 GB seems to be a fairly standard amount these days, but I find that 500 MB is plenty. Ideally, swap space should be a partition, not a file.

It [https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/bsdinstall-partitioning.html#configtuning-initial used to be recommended] to have separate partitions and slices (sub-partitions) for certain directories, but in my experience, it is perfectly fine to have one partition for swap and one for everything else. Consider using a separate drive for userland, so that you can recover more easily in case the OS drive dies.

Nevertheless, here's what I did when installing FreeBSD 8 with only one drive and one regular user:
* <code>/</code> = 500 MB (actual use is ~340 MB); newer releases require more than 512 MB to hold the old and new kernels
* <code>/tmp</code> = 500 MB (actual use is near zero for me)
* <code>/var</code> = 1.5 GB (actual use is ~790 MB for me)
* <code>/usr</code> = the rest (68 GB in my case; actual use is under 20 GB)
* 1 GB of swap space on a separate partition

===See what version of the OS is actually running===

The standard <code>uname -a</code> method doesn't really work, because it just shows you what the OS/version/branch/patch level was when the kernel was compiled; basically, it is the kernel version. The current userland version info must be obtained some other way.

In FreeBSD 10 and up, there's a tool for this:
* <code>freebsd-version</code>
By default, it reports the userland version at the time the tool was built, which should be correct in most cases.

Otherwise, if your OS source code (<code>/usr/src</code>) is current, then this should work:
* <code>grep -v '^#' /usr/src/sys/conf/newvers.sh | head -4</code>
Example output:
<pre>
TYPE="FreeBSD"
REVISION="8.3"
BRANCH="RELEASE-p7"</pre>
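For clarity, here's a portable sketch of how those assignments combine into the familiar version string. The sample file is hypothetical stand-in data; on a real system you would source <code>/usr/src/sys/conf/newvers.sh</code> itself:

```shell
# Hypothetical sample of the newvers.sh variable assignments:
cat > /tmp/newvers-sample.sh <<'EOF'
TYPE="FreeBSD"
REVISION="8.3"
BRANCH="RELEASE-p7"
EOF
# Source the assignments and combine them the way newvers.sh does:
. /tmp/newvers-sample.sh
echo "${TYPE} ${REVISION}-${BRANCH}"
```

That prints the same "8.3-RELEASE-p7" style string you see in the output of <code>uname</code> and <code>freebsd-version</code>.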
===Upgrade to a new patch level===
The patch level ("<code>-p7</code>" in the example above) correlates with security patches that were released as replacement binaries for the OS.

Of course, it's possible you applied patches and rebuilt some binaries yourself, according to instructions in the security advisories you get by email (you did sign up for them, right?), in which case the patch level is not really accurate.

Regardless, these binary patches are only available for OS versions that were distributed as binaries and that are still "supported", i.e. not more than 2 years old. I think this means pretty much just the latest <code>-RELEASE</code> branches. (<code>-STABLE</code> isn't distributed in binary form, and they don't worry about security at all for <code>-CURRENT</code>.) Therefore, you may first have to do a minor version update (see next section) or new patches won't even be available for your system.

First, get the patches (maybe unset the <code>GZIP</code> environment variable first to reduce clutter):

* <code>freebsd-update fetch</code>

It'll download them to a temporary location and tell you what will be changed. If you have the OS source code installed in <code>/usr/src</code>, source patches will be included in the update as well.

Now, install them:

* <code>freebsd-update install</code>

Whether a reboot is needed depends on what was updated; you have to decide that yourself. Obviously, anything kernel-related should make you want to reboot. If you don't reboot but system daemons were updated, you'll need to restart those.

If you previously recompiled any of your system binaries with custom options, such as '''sendmail''' in order to enable SMTP authentication (see below), and freebsd-update replaced those binaries, then you will have to recompile them! Otherwise, you will suddenly be running the standard version. I use a script I call <code>rebuild_sendmail</code> so that I don't have to look it up every time:

<pre>#!/bin/sh
# Rebuild sendmail's libraries and the sendmail binary from /usr/src,
# then restart the running daemon.
cd /usr/src/lib/libsmutil
make cleandir && make obj && make
cd /usr/src/lib/libsm
make cleandir && make obj && make
cd /usr/src/usr.sbin/sendmail
make cleandir && make obj && make && make install
cd /etc/mail
make restart</pre>

This assumes the source code is being kept up to date, i.e. that the <code>Components</code> line in <code>/etc/freebsd-update.conf</code> includes <code>src</code>: <code>Components src world kernel</code>
===Upgrade to a new minor version of the OS===
Reference: [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/updating-upgrading-freebsdupdate.html FreeBSD Update section of the FreeBSD Handbook]

The following info is based on my upgrades from 8.1-RELEASE to 8.3-RELEASE and from 8.3-RELEASE to 8.4-RELEASE (and assumes a generic kernel):

====Prepare the environment====
I normally have "-v" in my GZIP environment variable, and this really clutters the output of <code>freebsd-update</code>, so unset it:
* <code>unsetenv GZIP</code>

====Get new files====
* <code>freebsd-update -r 8.3-RELEASE upgrade</code>

This takes several hours.

====Merge files====
Most merges will happen automatically, but some un-mergeable files like <code>/etc/passwd</code> will be reported, and you need to answer 'y' and merge them manually... but you don't get a nice merge interface; you just get dumped into an empty text editor! What you are expected to do here is create a merged file. Be very careful!

The goal is to compare and merge each old file from the directory tree rooted at <code>/var/db/freebsd-update/merge/old</code> (copied from the live system) with the corresponding new file in <code>/var/db/freebsd-update/merge/</code><var>XXX</var>, where <var>XXX</var> is the new FreeBSD version you're upgrading to (e.g. <code>8.4-RELEASE</code>). You need to put each merged file into the same relative location under <code>/var/db/freebsd-update/merge/new</code>, which is where the empty text editor will be saving to.
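As a concrete sketch of that layout, using throwaway directories and entirely hypothetical file contents instead of a real upgrade:

```shell
# Hypothetical demo of the merge layout: old/ holds copies of the live
# files, 8.4-RELEASE/ holds the incoming release's versions, and the
# merged result goes under new/ at the same relative path.
base=/tmp/merge-demo
mkdir -p "$base/old/etc" "$base/8.4-RELEASE/etc" "$base/new/etc"
printf 'hostname="myhost"\n' > "$base/old/etc/rc.conf"
printf 'hostname=""\n'       > "$base/8.4-RELEASE/etc/rc.conf"
# Compare the two; diff exits nonzero when they differ, hence the || true
diff "$base/old/etc/rc.conf" "$base/8.4-RELEASE/etc/rc.conf" || true
# Here we decide to keep the local value as the "merged" result:
cp "$base/old/etc/rc.conf" "$base/new/etc/rc.conf"
cat "$base/new/etc/rc.conf"
```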
In my upgrade to 8.3-RELEASE, I just elected to go into the editor (you have no choice, really), loaded the old file, and saved it as-is. I didn't bother merging in the new one! Not ideal, but the least amount of hassle, right?

In my upgrade to 8.4-RELEASE, I tried a new approach: merge the files in a separate window, pre-populating the <code>new</code> folder, so that when the editor is opened, it's not empty, but rather has the merged file in it. Then I can just give it a once-over and save the result.

To accomplish this, in a separate terminal, as root, it would be nice to be able to run mergemaster. So I tried to do it like this:

* <code>mergemaster -w 100 -ciFv -m /var/db/freebsd-update/merge/8.4-RELEASE -D /var/db/freebsd-update/merge/new</code>

However, it didn't work. I have [http://lists.freebsd.org/pipermail/freebsd-questions/2013-June/251781.html asked about it] on the freebsd-questions mailing list. Here is another, cruder method I tried, which did work:

* <code>cd /var/db/freebsd-update/merge/8.4-RELEASE</code>
* <code>find -X . -type f | xargs -n 1 -o -I % sh -c '{ echo Now processing %. left=current, right=new, help="?"; sdiff -d -w 100 -o ../new/% ../old/% %; }'</code>

The downside of this method is that it assumes you want to do an interactive merge (sdiff) of every file, whereas sometimes you will really want to save time and just choose the old or new file without merging; mergemaster would give you that ability.
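The portable core of that one-liner is the <code>xargs -I</code> dispatch pattern, shown here without sdiff and without the BSD-specific <code>-X</code> and <code>-o</code> flags:

```shell
# Run one small shell command per input line; % is replaced by the line.
# The input here is hypothetical stand-in data for the find(1) output.
printf 'fileA\nfileB\n' | xargs -n 1 -I % sh -c 'echo "Now processing %"'
```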
Regardless of how you do your merge, once you've saved all the files in the editor, you'll be prompted to approve a diff for each one. If you answer "n" to any of these prompts, it will abort the entire upgrade and you will have to start over! So hopefully the merges are all OK, and you can continue.

However, among the changes you're asked to approve may be unspecified differences in <code>/etc/pwd.db</code> and <code>/etc/spwd.db</code>, the binary files that contain your password database. You have no choice but to answer "y", but '''for God's sake, rebuild those files before rebooting!''' (see below).

====Review changes====
freebsd-update now presents you with lists of all the files that will be deleted, all the files that will be added, and all the files that will be modified.

Pay special attention to the changes in <code>/etc</code>.

After showing you the lists, that's it. The changes are ready to be made, but nothing has actually happened yet.

====Install the new files====

You are about to overwrite your real system files. I suggest making a backup of <code>/etc</code> first:
* <code>cp -pr /etc /tmp/etc.backup</code>

Cross your fingers:
* <code>freebsd-update install</code>

====Rebuild soon-to-be-clobbered databases====
Now, unless you got mergemaster to work, you probably have to do the things that mergemaster normally would do for you.

''It seems things don't get replaced until after reboot. This may be a real problem!''

If <code>/etc/passwd</code> or <code>/etc/master.passwd</code> were changed, or if <code>/etc/pwd.db</code> or (most importantly, I think) <code>/etc/spwd.db</code> changed (e.g., as in 8.4-RELEASE, got reset to new defaults), then a <code>pwd_mkdb</code> run will be necessary to regenerate the .db files, and you want to do this before your shutdown or you'll never get to log back in.

Normally you would do this:
* <code>pwd_mkdb -p /etc/master.passwd</code>
This will use <code>/etc/master.passwd</code> as the source file, and the <code>-p</code> means to generate a new <code>/etc/passwd</code> from it, in addition to the .db files.
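As an illustration of the <code>-p</code> part of that (the .db files aside), here is how master.passwd's ten fields map onto passwd's seven. The sample entry is made up, and this awk one-liner is only a sketch of the mapping, not a replacement for <code>pwd_mkdb</code>:

```shell
# Hypothetical master.passwd entry; fields are
# name:password:uid:gid:class:change:expire:gecos:home:shell
cat > /tmp/master.passwd.sample <<'EOF'
alice:$1$hash:1001:1001::0:0:Alice:/home/alice:/bin/tcsh
EOF
# passwd(5) keeps name, uid, gid, gecos, home, and shell, and replaces
# the password hash with "*" (class/change/expire are dropped):
awk -F: -v OFS=: '{ print $1, "*", $3, $4, $8, $9, $10 }' /tmp/master.passwd.sample
```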
However, the files in <code>/etc</code> are, at this stage, untouched. The new versions are sitting gzipped in <code>/var/db/freebsd-update/files</code>, a huge dumping ground with no sub-structure. An index to the files is in <code>/var/db/freebsd-update/<var>install.XXXXX</var>/INDEX-NEW</code>, where <var>XXXXX</var> is a random ID; look at the directory creation date to figure out which one is current, if there's more than one.
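The commands below all lean on INDEX-NEW's pipe-delimited format: the path is in field 1, and field 7 holds the hash that names the corresponding gzipped file under <code>files/</code>. A sketch with a made-up index line (the real lines have more fields filled in):

```shell
# Hypothetical INDEX-NEW line, modeled on the pipe-delimited shape the
# grep/cut commands in these notes assume (field 7 = file hash):
cat > /tmp/index-sample <<'EOF'
/etc/master.passwd|f|0|0|0600|0|aabbccdd|
EOF
# Extract the hash for a given path, exactly as the real commands do:
grep '^/etc/master\.passwd' /tmp/index-sample | cut -d \| -f 7
```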
So I think what you need to do is something like this, to inspect the new files:
* <code>cd /var/db/freebsd-update</code>
* <code>mkdir -m 0700 /tmp/oldpwdfiles</code>
* <code>zcat files/`grep '^/etc/master\.passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/master.passwd</code>
* <code>zcat files/`grep '^/etc/passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/passwd</code>
* <code>zcat files/`grep '^/etc/pwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/pwd.db</code>
* <code>zcat files/`grep '^/etc/spwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz > /tmp/oldpwdfiles/spwd.db</code>
* <code>ls -l /tmp/oldpwdfiles</code>

<pre>total 10
6 -rw-r--r--  1 root  wheel  4.0k Jun 25 00:48 master.passwd
4 -rw-r--r--  1 root  wheel  3.2k Jun 25 00:49 passwd
0 -rw-r--r--  1 root  wheel    0B Jun 25 00:49 pwd.db
0 -rw-r--r--  1 root  wheel    0B Jun 25 00:49 spwd.db
</pre>

Obviously, pwd.db and spwd.db are crap, and we'd be in trouble if we installed those empty files!

If <code>/tmp/oldpwdfiles/master.passwd</code> looks OK, then try generating a new passwd file and pair of .db files:
* <code>mkdir -m 0700 /tmp/newpwdfiles</code>
* <code>pwd_mkdb -d /tmp/newpwdfiles -p /tmp/oldpwdfiles/master.passwd</code>
* <code>ls -l /tmp/newpwdfiles</code>

<pre>total 138
 6 -rw-------  1 root  wheel  4.0k Jun 25 00:48 master.passwd
 4 -rw-r--r--  1 root  wheel  3.2k Jun 25 00:53 passwd
68 -rw-r--r--  1 root  wheel   68k Jun 25 00:53 pwd.db
60 -rw-------  1 root  wheel   60k Jun 25 00:53 spwd.db
</pre>

Quite a bit better. As you can see, master.passwd was just moved over, and the other three files were generated. Now to replace them:
* <code>gzip /tmp/newpwdfiles/*</code>
* <code>mv /tmp/newpwdfiles/master.passwd.gz files/`grep '^/etc/master\.passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz</code>
* <code>mv /tmp/newpwdfiles/passwd.gz files/`grep '^/etc/passwd' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz</code>
* <code>mv /tmp/newpwdfiles/pwd.db.gz files/`grep '^/etc/pwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz</code>
* <code>mv /tmp/newpwdfiles/spwd.db.gz files/`grep '^/etc/spwd\.db' install.LYQAJQ/INDEX-NEW | cut -d \| -f 7`.gz</code>

And finally, clean up:
* <code>rm -fr /tmp/oldpwdfiles /tmp/newpwdfiles</code>

You'll have to go through a similar process if you use sendmail and you merged in any changes to <code>/etc/mail/aliases</code> or <code>/etc/mail/*.cf</code> files. Ordinarily, the most thorough way is this:
* <code>cd /etc/mail; make all</code>
* <code>make install</code>
* <code>make restart</code>

But as before, the files haven't been installed yet!

Likewise, changes to <code>/etc/login.conf</code> require rebuilding a database:
* <code>cap_mkdb</code> (see the man page for exact syntax)

Same for <code>/etc/services</code>:
* <code>services_mkdb</code> (see the man page for exact syntax)

There's a bug filed about this, but only for master.passwd; it doesn't take into account this latest development where .db files are clobbered: http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/165954
====Reboot and continue====
OK, now reboot to try out the new kernel:
* <code>shutdown -r now</code> (again, this assumes you want the generic kernel)

Hope & pray it comes back up. If it does, do this again to get world installed:

* <code>freebsd-update install</code>

This worked for me, for the upgrade to 8.3-RELEASE.

For the 8.4 upgrade, after this stage, it said:
<pre>Completing this upgrade requires removing old shared object files.
Please rebuild all installed 3rd party software (e.g., programs
installed from the ports tree) and then run "/usr/sbin/freebsd-update install"
again to finish installing updates.</pre>

Worry about that in a minute. First, realize that at this point, <code>/etc</code> has been modified, so it's a good idea to make sure you like the look of the new files, especially these:
* <code>/etc/master.passwd</code>
* <code>/etc/group</code>
* <code>/etc/mail/*</code> (if changed, ''you'' need to run the appropriate <code>make</code> command in <code>/etc/mail</code>... perhaps <code>make all install restart</code>)
* <code>/etc/services</code> (if changed, ''you'' need to run <code>services_mkdb -q</code> to rebuild <code>/var/db/services.db</code>)
* <code>/etc/login.conf</code> (if changed, freebsd-update should've run <code>cap_mkdb</code> to rebuild <code>login.conf.db</code>)

If anything's amiss, remember you made a backup in <code>/tmp/etc.backup</code>.

OK, now you can follow the directions below to update your ports tree and rebuild everything(!). Personally, I don't like doing this, because things tend to go wrong if you don't do it piecemeal. The downside of the piecemeal approach is that some things will be left un-updated. But you can deal with that; read on...
====Check for cruft====
After the upgrade, you might want to see if anything out-of-date got left behind:
* <code>cd /usr/src && make check-old</code>
If there's anything, you can run <code>make delete-old</code> to get rid of it; normally, it will ask you about each file. Ref: http://www.freebsd.org/doc/handbook/make-delete-old.html

There are a couple of options for checking the installed shared libraries:
* If you install the <code>sysutils/bsdadminscripts</code> port, you can run <code>pkg_libchk</code> to check for missing libraries. It even tells you which ports are affected.
* If you install the <code>sysutils/libchk</code> port (note: requires Ruby), you can run <code>libchk</code> to check for missing libraries, check for unused libraries, and see exactly which binaries use each library. To figure out which port installed the file needing the library, you need to run <code>pkg info -W <var>/path/to/the/file</var></code>.

Sample output of <code>pkg_libchk</code>:
<pre>gamin-0.1.10_4: /usr/local/libexec/gam_server misses libpcre.so.0
gio-fam-backend-2.28.8_1: /usr/local/lib/gio/modules/libgiofam.so misses libpcre.so.1</pre>

Rebuilding these two ports should be sufficient to get them linked to the current libpcre library. (Double-checking <code>/usr/local/lib</code> shows that there's a <code>libpcre.so.3</code> now.)

Why did I have these ports installed? <code>pkg info -R gamin-0.1.10_4</code> tells me gamin is required by gio-fam-backend, and <code>pkg info -R gio-fam-backend-2.28.8_1</code> reveals that gio-fam-backend isn't required by anything that I currently have installed. This is a weird port, though, and it is not something you want to deinstall. It is FreeBSD-specific, and is kind of a companion to the glib port. (Though apparently they decommissioned it; see the 20130731 entry in UPDATING.) <code>pkg info -R glib-2.34.3</code> reveals what's using glib: ImageMagick & MediaWiki.

Anyway, <code>portmaster --update-if-newer gio-fam gamin</code> takes care of the problem. Now when I run <code>pkg_libchk gamin-0.1.10_5</code> and <code>pkg_libchk gio-fam-backend-2.34.3</code> (the new versions), there are no problems. The question now is whether I need to update ImageMagick. The lack of problems reported by <code>pkg_libchk ImageMagick-nox11-6.8.0.7_1</code> suggests the answer is no.
====Reboot to restart daemons====
After upgrading from 8.3-RELEASE to 8.4-RELEASE, <code>/var/log/messages</code> started accumulating error messages from sshd every time someone tried to log in:
<pre>error: Could not load host key: /etc/ssh/ssh_host_ecdsa_key</pre>

Indeed, that key file didn't exist until after ''another'' reboot, which didn't happen until a mysterious, probably unrelated crash a month after the upgrade.

Web searches suggest that most people running into this problem aren't able to log in ''at all'' until they run a special <code>ssh-keygen</code> command to create the missing files, but I was having no such trouble.

I think that for me, the only problem was that after finishing the OS upgrade, sshd needed to actually be restarted. This makes me think that maybe it's a good idea to restart all the daemons as the penultimate step in upgrading the OS. To do that, you could run <code>service -R</code>, but it might be easier to just reboot.

==Ports installation & upgrade==
Here's some general info about this topic.
===Get a quick list of installed ports===
* with pkgng: <code>pkg info -aoq | sort</code>
* without pkgng: <code>pkg_info -aoq | sort</code>

===Portmaster flags===
Some of the most useful flags for portmaster:
* <code>-d</code> will make it delete old distfiles after each port is installed, rather than asking you about it. (<code>-D</code> would make it keep them.)
* <code>-b</code> will make it keep the backup it made when installing the previous version of a port. It usually deletes the backup after successfully installing a new version.
* <code>-x ''pattern''</code> will make it exclude ports (including dependencies) that match the glob pattern.
* <code>--update-if-newer</code> will prevent rebuilding/reinstalling ports that don't need it. But for some reason, you have to specify more than one port on the command line for this to work.
* <code>-n</code> depends on what else you are doing. Usually it means do a dry run. But in conjunction with <code>-e <var>pkgdbfolder</var></code>, <code>-s</code>, <code>--clean-distfiles</code>, <code>--clean-packages</code>, <code>--check-depends</code>, or <code>--check-port-dbdir</code>, it means "answer no to all questions."
* <code>--packages</code> will make it use a package (a major timesaver) ''if'' the latest package isn't older than the version in the ports collection. Otherwise, it falls back on building the port.
* <code>--build-packages</code> will make it try to use packages for build dependencies... I haven't figured this one out yet. It seems to not be necessary?
Here's an example (to update Perl modules, and Perl if needed):
* <code>portmaster -d --update-if-newer --packages p5-</code>

===Environment prep===
If you have set your BZIP2 environment variable to include <code>-v</code>, like I have, and you have portaudit installed, then you will probably find that every time you do anything with ports or packages, you get a bunch of useless lines that say <code>/var/db/portaudit/auditfile.tbz: done</code>, and FreeBSD's <code>/usr/ports/Mk/bsd.port.mk</code> misinterprets this as problems needing to be fixed.
* <code>unsetenv BZIP2</code>
[http://lists.freebsd.org/pipermail/freebsd-ports/2013-April/082845.html I reported this bug] to the freebsd-ports mailing list, but I doubt it will get fixed unless I submit a patch myself.

(Looks like this was eventually fixed as part of pkgng integration.)

===Update portmaster===
Updating portmaster itself is probably a good idea before doing anything else with it:
* <code>portmaster --packages portmaster</code>

Since <code>--update-if-newer</code> needs multiple packages to be specified, we can't use it here. Thus, if there's nothing to update, you will end up reinstalling the same version you already had.

===Check integrity of existing ports===
* <code>portmaster --check-depends</code>

===Delete cached options from previous builds of stale ports===
This just does some cleanup of <code>/var/db/ports</code>, which is where the options you chose in the 'make config' step of port building are stored. The options for ports that are currently properly installed will be left alone.
* <code>portmaster --check-port-dbdir</code>

===Update ports collection===
The ports collection is a folder tree containing Makefiles and patches for third-party software. Anytime you want to add or update third-party software, first make sure the ports collection is up-to-date. Reference: [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/updating-upgrading-portsnap.html the Portsnap section of the FreeBSD Handbook]

First time using portsnap, or just want a fresh tree? Download the current ports tree to a temporary location (fetch), then install it in <code>/usr/ports</code>, replacing whatever was there before (extract):
* <code>portsnap fetch extract</code>

Not the first time? Download updates to a temporary location (fetch), then apply them to the existing ports tree (update), deleting any modified or added files:
* <code>portsnap fetch update</code>

Now go look at <code>/usr/ports/UPDATING</code>.
===See what packages need updating===
* <code>pkg audit</code> will tell you which installed packages have security vulnerabilities.
* <code>pkg version -v -l "<"</code> will tell you which installed packages could be upgraded from the packages collection. It's fast.
* <code>pkg version -P -v -l "<"</code> will tell you which installed packages could be upgraded from the ports collection. It's slow.

The upgrade info is based on the info in <code>/usr/ports</code>, not on what's new online.

Some ports will just have a portrevision bump due to changes in the port's Makefile. These are usually unimportant and not worth the pain of rebuilding and reinstalling.
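A small illustration of where the portrevision lives in a package name (the name is borrowed from the gamin example earlier; the sed call just extracts the trailing number):

```shell
# The "_N" suffix on a package name is the portrevision, bumped when
# only the port's packaging changes, not the upstream version.
echo 'gamin-0.1.10_4' | sed -E 's/.*_([0-9]+)$/\1/'
```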
===Dealing with port upgrade problems===

====A port has moved====
The Handbook doesn't cover this, but sometimes the ports collection folder for a port that you've installed will get moved.

These moves are listed in <code>/usr/ports/MOVED</code>, which is read by portmaster. So, although you could look at that file beforehand, you probably won't find out about a move until you run <code>portmaster --check-depends</code>, or when you try to update your installed port.

For example, there was once a www/mediawiki meta-port, which pointed to the actual port for the latest stable version. I had used it to install mediawiki119. When I went to update it with <code>portmaster www/mediawiki</code>, I got the following error:
<pre>
===>>> The www/mediawiki port moved to www/mediawiki119
===>>> Reason: Rename mediawiki to mediawiki119
</pre>

The first place to look when you see this message is <code>/usr/ports/UPDATING</code>. Often, there will be a note about it there, with instructions. In this case, though, there wasn't, so I [http://lists.freebsd.org/pipermail/freebsd-ports/2013-May/083943.html asked about it on freebsd-ports] and also [http://lists.freebsd.org/pipermail/freebsd-doc/2013-May/022060.html on freebsd-doc]. I was told that UPDATING will only have unusual things in it, and this particular situation didn't qualify, because the version hadn't actually changed.

I don't think there's a way to just update the list of installed packages so that it will know about the move. You have to want to update the port, and then use portmaster's <code>-o</code> flag to say which new port you want to replace the old one with.

So, for an ordinary move, the answer is:
* <code>portmaster -o <var>NEWPORT</var> <var>INSTALLEDPORT</var></code>

For example, I could have updated without changing the version:
* <code>portmaster -o www/mediawiki119 www/mediawiki</code>

But since there was a newer version available, I decided to update to it:
* <code>portmaster -o www/mediawiki120 www/mediawiki</code>
====lzma library errors====

This probably won't come up again, but maybe it will help someone else. After updating to 8.4-RELEASE, I was trying to rebuild the PHP port (as part of the MediaWiki upgrade), but it failed early in the process with this message:

<pre>checking whether libxml build works... no
configure: error: build test failed. Please check the config.log for details.
===> Script "configure" failed unexpectedly.
Please report the problem to ale@FreeBSD.org [maintainer] and attach the
"/usr/ports/lang/php5/work/php-5.4.16/config.log" including the output of the
failure of your make command. Also, it might be a good idea to provide an
overview of all packages installed on your system (e.g. a /usr/sbin/pkg_info
-Ea).
*** Error code 1
Stop in /usr/ports/lang/php5.</pre>

Looking at that config.log file, I saw more detail:

<pre>configure:21972: checking whether libxml build works
configure:21999: cc -o conftest -O2 -pipe -march=pentium3 -fno-strict-aliasing -fvisibility=hidden -R/usr/local/lib -L/usr/local/lib conftest.c -lm -lxml2 -lz -liconv -lm >&5
/usr/local/lib/libxml2.so: undefined reference to `lzma_code@XZ_5.0'
/usr/local/lib/libxml2.so: undefined reference to `lzma_properties_decode@XZ_5.0'
/usr/local/lib/libxml2.so: undefined reference to `lzma_end@XZ_5.0'
/usr/local/lib/libxml2.so: undefined reference to `lzma_auto_decoder@XZ_5.0'
configure:21999: $? = 1
configure: program exited with status 1</pre>

On a hunch, I decided to see what would happen if I tried to restart Apache:

<pre>httpd: Syntax error on line 108 of /usr/local/etc/apache22/httpd.conf: Cannot load /usr/local/libexec/apache22/libphp5.so into server: /usr/local/lib/liblzma.so.5: version XZ_5.0 required by /usr/local/lib/libxml2.so.5 not defined</pre>

When Googling for answers, I found some mention that ports needing the lzma port now need to use the xz port. Something doesn't sound right about that, though, because the xz port is deprecated as well.

It turns out that at some point, the xz port had been installed, needed by some other port. This resulted in some "lzma" libs being placed in <code>/usr/local/lib</code> a very long time ago. Better lzma libs later became part of the base system in <code>/usr/lib</code>. Since the old libs were still sitting in <code>/usr/local/lib</code>, they were taking precedence when other ports needed them. This eventually prevented the PHP port from building, due to its reliance on libxml2, which in turn relies on liblzma, which needs to be up-to-date.

[[Category:FreeBSD]]
| |
− | | |
− | Simply moving the outdated libs out of <code>/usr/local/lib</code> took care of the problem. Specifically, it was <code>/usr/local/lib/liblzma.*</code>. Really, though, the solution is to <code>pkg_delete xz-5.03</code> (or whatever version you have).
| |
− | | |
− | ====more lzma library errors====
| |
− | | |
− | While attempting to upgrade all of my installed ports on another occasion in late 2013, the graphics/gd port failed to build because libtool was looking for the nonexistent <code>/usr/local/lib/liblzma.la</code>. A [http://lists.freebsd.org/pipermail/freebsd-ports/2011-August/069125.html 2011 discussion] about it suggested the fix might be as easy as deleting and reinstalling ImageMagick:
| |
− | * <code>cd /usr/ports/graphics/ImageMagick-nox11</code>
| |
− | * <code>make deinstall clean install</code>
| |
− | | |
− | This just led to the same kind of failure when the build tried to link in ImageMagick's tiff coder. So I tried rebuilding the underlying lib first:
| |
− | * <code>cd /usr/ports/graphics/tiff</code>
| |
− | * <code>make deinstall clean install</code>
| |
− | | |
− | Then I went back to the ImageMagick build:
| |
− | * <code>cd /usr/ports/graphics/ImageMagick-nox11</code>
| |
− | * <code>make</code>
| |
− | | |
− | That got me past the tiff coder error, so I continued:
| |
− | * <code>make install</code>
| |
− | | |
− | That worked as well.
| |
− | | |
− | ImageMagick's enormous set of dependencies and lengthy build process have been problematic for me in the past. I'd rather exclude it from any port upgrades, but I'm not sure it's possible or wise to do so.
| |
− | | |
− | ==First installation of specific ports==
| |
− | ===MySQL===
| |
− | * Install the databases/mysql##-server port. This will also install the client; no need to install the client port separately.
| |
− | * Close off access to the server from outside of localhost by making sure this is in /var/db/mysql/my.cnf:
| |
− | <pre>[mysqld]
| |
− | bind-address=127.0.0.1</pre>
| |
− | Also, if you have enabled an <code>ipfw</code> firewall, you can put similar ipfw rules somewhere like <code>/etc/rc.local</code>. For example (replace <var>X</var> with your IP address in all 3 places, and be sure to actually run these commands now if you're not going to reboot):
| |
− | <pre># only allow local access to MySQL
| |
− | ipfw add 3000 allow tcp from X to X 3306
| |
− | ipfw add 3001 deny tcp from any to X 3306</pre>
| |
− | | |
− | * Make sure <code>mysql_enable="YES"</code> is in <code>/etc/rc.conf</code>, then run <code>/usr/local/etc/rc.d/mysql-server start</code>. MySQL is now running but is insecure; you need to set the root password and delete the anonymous accounts as described in the manual at http://dev.mysql.com/doc/refman/5.1/en/default-privileges.html ... however, if you're also restoring data from backups, you can skip this step, since your backups hopefully include the 'mysql' database, which has all the user account data in it!
| |
− | | |
− | You need to set a root password for MySQL. This is one way (where ''PWD'' is the password you want to use):
| |
− | * <code>mysqladmin -u root password ''PWD''</code>
| |
− | | |
− | Or, if you already have a backup of the 'mysql' database, such as made by my script, you can just load that backup, because the usernames and passwords are stored in there.
| |
− | | |
− | To restore from backups:
| |
− | | |
− | # Unzip the latest backup file in <code>/usr/backup/mysql/daily</code> (see above for the script that puts the backup files there).
| |
− | # Run <code>mysql < backupfile.sql</code> to load the data, including user tables & passwords.
| |
− | # Run <code>mysql_upgrade</code> to verify that the data is all OK to use with this version of MySQL.
| |
− | # <code>mysql -u root</code> should now give an error for lack of password. Time to install MediaWiki?
| |
− | | |
− | ===MediaWiki===
| |
− | The www/mediawiki port installs MediaWiki, which is small, but its dependencies result in the installation of many other ports, including ImageMagick (assuming you are supporting image uploads), Ghostscript, libxslt, docbook-xsl, and Python.
| |
− | | |
− | For ImageMagick, since I'm only using it in MediaWiki, I disable its X11 and Perl support, as well as support for formats I don't care about, like FPX, JBIG, and JPEG2000.
| |
− | | |
− | ===sa-utils===
| |
− | sa-utils is an undocumented port that installs the script <code>/usr/local/etc/periodic/daily/sa-utils</code>. The purpose of the script is to run sa-update and restart spamd every day, so you don't have to do it from a cron job. You get the output, if any, in your "daily" report by email.
| |
− | * Install the mail/sa-utils port. When prompted, enable sa-compile support.
| |
− | * Put whatever flags sa-update needs in <code>/etc/periodic.conf</code>. For me, it's:<br><code>daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org"</code> and, after I've confirmed it's working OK, <code>daily_sa_quiet="yes"</code>.
| |
− | * Assuming you enabled sa-compile support, uncomment this line in <code>/usr/local/etc/mail/spamassassin/v320.pre</code>:<br><code>loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody</code>
| |
− | | |
− | That's it.
| |
− | | |
− | Now, if you don't want to install sa-utils, but you are running SpamAssassin, you'll want a cron job that updates SpamAssassin rules and restarts spamd every day. Here's the basic version I used to use for the core rules:
| |
− | * <code>/usr/local/bin/sa-update --nogpg --channel updates.spamassassin.org && /usr/local/etc/rc.d/sa-spamd restart</code>
| |
− | | |
− | After using that for years, I switched to a version that incorporates SpamAssassin developer Justin Mason's [http://wiki.apache.org/spamassassin/SoughtRules "sought.cf" ruleset]. First, outside of crontab, add the channels' GPG keys to sa-update's keyring:
| |
− | * <code>mkdir -m 700 /usr/local/etc/mail/spamassassin/sa-update-keys/</code>
| |
− | * <code>fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY</code>
| |
− | * <code>fetch http://spamassassin.apache.org/updates/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY</code>
| |
− | | |
− | The caveat here is that the keys will eventually expire. For example, the one for sought.rules.yerp.org expires on 2017-08-09. At that point, you'll have to notice that the updates stopped working, and get a new key. To see the keys on sa-update's keyring, you can do this:
| |
− | * <code>gpg --homedir /usr/local/etc/mail/spamassassin/sa-update-keys --list-key</code>
| |
− | | |
− | So here's what goes in the crontab:
| |
− | * <code>env PATH=/usr/bin:/bin:/usr/local/bin /usr/local/bin/sa-update -v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org && /usr/local/etc/rc.d/sa-spamd restart</code>
| |
− | The reason I override the cron environment's default path of <code>/usr/bin:/bin</code> is that sa-update needs to run the GPG tools in <code>/usr/local/bin</code>.
| |
− | | |
− | However, like I said, instead of a cron job, I'm using sa-utils now.
| |
− | | |
− | ===tt-rss===
| |
− | The www/tt-rss port is [http://tt-rss.org/redmine/projects/tt-rss/wiki/InstallationNotes Tiny Tiny RSS], a web-based feed reader I'm now using instead of Google Reader.
| |
− | | |
− | * <code>portmaster www/tt-rss</code>
| |
− | * <code>mysql -p<var>YYYYY</var></code>
| |
− | ** <code>create database ttrss;</code>
| |
− | ** <code>connect ttrss;</code>
| |
− | ** <code>source /usr/local/www/tt-rss/schema/ttrss_schema_mysql.sql;</code>
| |
− | ** <code>quit;</code>
| |
− | | |
− | * edit <code>/usr/local/www/tt-rss/config.php</code>:
| |
− | ** DB_USER needs to be <code>root</code> (I didn't bother creating a special user...)
| |
− | ** DB_NAME needs to be <code>ttrss</code>
| |
− | ** DB_PASS needs to be whatever's appropriate for DB_USER
| |
− | ** DB_PORT needs to be <code>3306</code>
| |
− | ** SELF_URL_PATH needs to be whatever is appropriate
| |
− | ** FEED_CRYPT_KEY needs to be 24 random characters
| |
− | ** REG_NOTIFY_ADDRESS needs to be a real email address
| |
− | ** SMTP_FROM_ADDRESS needs to at least have your real domain
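For FEED_CRYPT_KEY, here's one quick way to generate 24 random characters (the alphanumeric character set is just my choice; only the length matters):

```shell
# Print a 24-character alphanumeric key suitable for FEED_CRYPT_KEY
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24; echo
```

Paste the output into the FEED_CRYPT_KEY setting in config.php.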
| |
− | * <code>cp /usr/local/share/tt-rss/httpd-tt-rss.conf /usr/local/etc/apache22/Includes/</code>
| |
− | * <code>/usr/local/etc/rc.d/apache22 reload</code>
| |
− | | |
− | * visit <nowiki>http://yourdomain/tt-rss/</nowiki>
| |
− | <pre>Startup failed
| |
− | Tiny Tiny RSS was unable to start properly. This usually means a misconfiguration
| |
− | or an incomplete upgrade. Please fix errors indicated by the following messages:
| |
− | | |
− | FEED_CRYPT_KEY requires mcrypt functions which are not found.</pre>
| |
− | | |
− | The solution, after making sure mcrypt isn't mentioned in <code>/usr/ports/www/tt-rss/Makefile</code>:
| |
− | * <code>portmaster security/php5-mcrypt</code>
| |
− | * <code>/usr/local/etc/rc.d/apache22 restart</code>
| |
− | * visit <nowiki>http://yourdomain/tt-rss/</nowiki> and you should get a login screen. u: ''admin'', p: ''password''.
| |
− | * Actions > Preferences > Users. Select checkbox next to admin, choose Edit. Enter new password in authentication box.
| |
− | | |
− | The password is accepted, but subsequent accesses to all but the main Preferences page result in <code>"{"error":{"code":6}}"</code>. There's nothing in the ttrss_error_log table in the database. Apache error log shows a few weird things, but nothing directly related:
| |
− | <pre>File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php
| |
− | File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
| |
− | File does not exist: /usr/local/www/tt-rss/false, referer: https://skew.org/tt-rss/prefs.php
| |
− | File does not exist: /www/skew.org/"images, referer: https://skew.org/tt-rss/index.php</pre>
| |
− | | |
− | Logging in again seems to take care of it, unless I change the password again. This only affects the admin user.
| |
− | | |
− | Create a new user, and log in as that user. Subscribe to some feeds. Feeds won't update at all unless you double-click on their names, one by one.
| |
− | | |
− | Now the update daemon:
| |
− | | |
− | * In <code>/etc/rc.conf</code>, add <code>ttrssd_enable="YES"</code>
| |
− | * <code>/usr/local/etc/rc.d/ttrssd start</code>
| |
− | | |
− | Feeds should now update automatically, as per the interval defined in Actions > Preferences > Default feed update interval. Minimum value for this, though, is 15 minutes. This can also be overridden on a per-feed basis.
| |
− | | |
− | Themes are installed by putting uniquely named .css files (and any supporting files & folder) in tt-rss's <code>themes/</code> directory. I decided to try [https://github.com/naeramarth7/clean-greader clean-greader] for a Google Reader-like experience. It works great, but I'm not happy with some of it, especially its thumbnail-izing of the first image in the feed content, so I use the Actions > Preferences > Customize button and paste in this CSS:
| |
− | | |
− | <pre>/* use a wider view for 1680px width screens, rather than 1200px (see also 1180px setting below) */
| |
− | #main { max-width: 1620px; }
| |
− | | |
− | /* preferences help text should be formatted like tt-rss.css says, and make it smaller & italic */
| |
− | div.prefHelp {
| |
− | color : #555;
| |
− | padding : 5px;
| |
− | font-size: 80%;
| |
− | font-style: italic;
| |
− | }
| |
− | | |
− | /* tidy up feed title bar, especially to handle feed icons, which come in wacky sizes */
| |
− | img.tinyFeedIcon { height: 16px; }
| |
− | div.cdmFeedTitle {
| |
− | background-color: #eee;
| |
− | padding-left: 2px;
| |
− | height: 16px; }
| |
− | a.catchup {
| |
− | padding-left: 1em;
| |
− | color: #cdd;
| |
− | font-size: 75%;
| |
− | font-style: italic;
| |
− | }
| |
− | | |
− | /* Narrower left margin (44px instead of 71px), greater width (see also #main above) */
| |
− | .claro .cdm.active .cdmContent .cdmContentInner,
| |
− | .claro .cdm.expanded .cdmContent .cdmContentInner {
| |
− | padding: 0 8px 0 50px;
| |
− | max-width: 1180px;
| |
− | }
| |
− | | |
− | /* main feed image is often real content, e.g. on photo blogs, so don't shrink it */
| |
− | .claro .cdm.active .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
| |
− | .claro .cdm.active .cdmContent .cdmContentInner p:first-of-type img,
| |
− | .claro .cdm.active .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child,
| |
− | .claro .cdm.expanded .cdmContent .cdmContentInner div[xmlns="http://www.w3.org/1999/xhtml"]:first-child a img,
| |
− | .claro .cdm.expanded .cdmContent .cdmContentInner p:first-of-type img,
| |
− | .claro .cdm.expanded .cdmContent .cdmContentInner > span:first-child > span:first-child > img:first-child {
| |
− | float: none;
| |
− | margin: 0 0 16px 0 !important;
| |
− | max-height: none;
| |
− | max-width: 100%;
| |
− | }
| |
− | | |
− | /* scroll bars are too hard to see by default */
| |
− | ::-webkit-scrollbar-track {
| |
− | background-color: #ccc;
| |
− | }
| |
− | ::-webkit-scrollbar-thumb {
| |
− | background-color: #ddd;
| |
− | }
| |
− | </pre>
| |
− | | |
− | ===py-fail2ban===
| |
− | After installing the port, create <code>/usr/local/etc/fail2ban/action.d/bsd-route.conf</code> with the following contents:
| |
− | <pre># Fail2Ban configuration file
| |
− | #
| |
− | # Author: Michael Gebetsroither, amended by Mike J. Brown
| |
− | #
| |
− | # This is for blocking whole hosts through blackhole routes.
| |
− | #
| |
− | # PRO:
| |
− | # - Works on all kernel versions and has no compatibility problems (back to debian lenny and WAY further).
| |
− | # - It's FAST for very large numbers of blocked ips.
| |
− | # - It's FAST because it blocks traffic before it enters the common iptables chains used for filtering.
| |
− | # - It's per host, ideal as action against ssh password bruteforcing to block further attack attempts.
| |
− | # - No additional software required beside iproute/iproute2
| |
− | #
| |
− | # CON:
| |
− | # - Blocking is per IP and NOT per service, but ideal as action against ssh password bruteforcing hosts
| |
− | | |
− | [Definition]
| |
− | | |
− | # Option: actionstart
| |
− | # Notes.: command executed once at the start of Fail2Ban.
| |
− | # Values: CMD
| |
− | #
| |
− | actionstart =
| |
− | | |
− | | |
− | # Option: actionstop
| |
− | # Notes.: command executed once at the end of Fail2Ban
| |
− | # Values: CMD
| |
− | #
| |
− | actionstop =
| |
− | | |
− | | |
− | # Option: actioncheck
| |
− | # Notes.: command executed once before each actionban command
| |
− | # Values: CMD
| |
− | #
| |
− | actioncheck =
| |
− | | |
− | | |
− | # Option: actionban
| |
− | # Notes.: command executed when banning an IP. Take care that the
| |
− | # command is executed with Fail2Ban user rights.
| |
− | # Tags: See jail.conf(5) man page
| |
− | # Values: CMD
| |
− | #
| |
− | actionban = route -q add <ip> 127.0.0.1 <routeflags>
| |
− | | |
− | | |
− | # Option: actionunban
| |
− | # Notes.: command executed when unbanning an IP. Take care that the
| |
− | # command is executed with Fail2Ban user rights.
| |
− | # Tags: See jail.conf(5) man page
| |
− | # Values: CMD
| |
− | #
| |
− | actionunban = route -q delete <ip> 127.0.0.1
| |
− | | |
− | [Init]
| |
− | | |
− | # Option: routeflags
| |
− | # Note: Space-separated list of flags, which can be -blackhole or -reject
| |
− | # Values: STRING
| |
− | routeflags = -blackhole</pre>
| |
− | | |
− | Also create <code>/usr/local/etc/fail2ban/jail.local</code>. In it, you can override examples in <code>jail.conf</code>, and add your own:
| |
− | | |
− | <pre>[apache-badbots]
| |
− | enabled = true
| |
− | filter = apache-noscript
| |
− | action = bsd-route
| |
− | sendmail-buffered[name=apache-badbots, lines=5, dest=root@yourdomain]
| |
− | logpath = /var/log/www/*/*error_log
| |
− | | |
− | [apache-noscript]
| |
− | enabled = true
| |
− | filter = apache-noscript
| |
− | action = bsd-route
| |
− | sendmail-whois[name=apache-noscript, dest=root@yourdomain]
| |
− | logpath = /var/log/www/*/*error_log
| |
− | | |
− | [sshd]
| |
− | enabled = true
| |
− | filter = bsd-sshd
| |
− | action = bsd-route
| |
− | sendmail-whois[name=sshd, dest=root@yourdomain]
| |
− | logpath = /var/log/auth.log
| |
− | maxretry = 6
| |
− | | |
− | [sendmail]
| |
− | enabled = true
| |
− | filter = bsd-sendmail
| |
− | action = bsd-route
| |
− | sendmail-whois[name=sendmail, dest=root@yourdomain]
| |
− | logpath = /var/log/maillog</pre>
| |
− | | |
− | Be sure to replace ''yourdomain''. Check for errors with the command <code>fail2ban-client -d | grep '^ERROR' || echo no errors.</code>
| |
− | | |
− | In <code>/etc/rc.conf</code>, add the line <code>fail2ban_enable="YES"</code> and then run <code>/usr/local/etc/rc.d/fail2ban start</code>
| |
− | | |
− | Disable any cron jobs that were doing work that you expect fail2ban to now be doing.
| |
− | | |
− | Check your log rotation scripts to make sure they create new, empty files as soon as they rotate the old logs out. Apache HTTPD, for example, won't create a new log until there's something to put in it, and if fail2ban notices the logfile is missing for too long, it will disable the jail.
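The rotate-then-recreate idea looks like this (sketched on a scratch file; in a real rotation script, the same mv + touch pair runs on the Apache error log before the server is signaled):

```shell
# Rotate a log and immediately recreate it empty, so watchers like fail2ban
# never see the file go missing. mktemp stands in for the real log path.
LOG=$(mktemp)
echo "old entries" > "$LOG"
mv "$LOG" "$LOG.0"     # rotate the old log out of the way
touch "$LOG"           # a new, empty log exists right away
ls -l "$LOG" "$LOG.0"
rm -f "$LOG" "$LOG.0"  # clean up the demo files
```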
| |
− | | |
− | Because you're going to get mail from fail2ban@yourdomain, set up an alias for this account so that any bounces (e.g. due to network problems) will go to the alias.
| |
− | | |
− | ===Rootkit Hunter===
| |
− | | |
− | Install the program and set up its database:
| |
− | | |
− | * <code>portmaster security/rkhunter</code> – this will install wget as well.
| |
− | * <code>rehash</code>
| |
− | * <code>rkhunter --propupd</code>
| |
− | * <code>rkhunter --update</code>
| |
− | | |
− | Run the program once to see if it finds anything:
| |
− | | |
− | * <code>rkhunter --check</code>
| |
− | | |
− | As per the [http://rkhunter.cvs.sourceforge.net/viewvc/rkhunter/rkhunter/files/FAQ Rootkit Hunter FAQ], if nothing looks wrong but you got warnings about commands being replaced by scripts, generate a list of <code>SCRIPTWHITELIST</code> entries to manually add to the appropriate section of <code>/usr/local/etc/rkhunter.conf</code>:
| |
− | | |
− | * <code>awk -F"'" '/replaced by a script/ {print "SCRIPTWHITELIST="$2}' /var/log/rkhunter.log</code>
| |
− | | |
− | There are more examples at the bottom of the FAQ.
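To see what the one-liner does, here it is run against a sample line (the wording is approximated from a typical rkhunter warning; the real input is <code>/var/log/rkhunter.log</code>):

```shell
# Split on single quotes; field 2 is the path of the replaced command
echo "Warning: The command '/usr/bin/lwp-request' has been replaced by a script" \
  | awk -F"'" '/replaced by a script/ {print "SCRIPTWHITELIST="$2}'
```

This prints <code>SCRIPTWHITELIST=/usr/bin/lwp-request</code>, ready to paste into rkhunter.conf.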
| |
− | | |
− | Beware: if you are running rkhunter from an interactive shell and have aliased 'ls' and/or configured it for color output, the unexpected output may not be parsed properly during the 'filesystem' tests, and you will get bogus warnings about hidden directories:
| |
− | <pre>[04:04:54] Warning: Hidden directory found: ?[1m?[38;5;6m/usr/.?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/usr/.^[[39;49m^[[m' (No such file or directory)
| |
− | [04:04:54] Warning: Hidden directory found: ?[1m?[38;5;6m/usr/..?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/usr/..^[[39;49m^[[m' (No such file or directory)
| |
− | [04:04:55] Warning: Hidden directory found: ?[1m?[38;5;6m/etc/.?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/etc/.^[[39;49m^[[m' (No such file or directory)
| |
− | [04:04:55] Warning: Hidden directory found: ?[1m?[38;5;6m/etc/..?[39;49m?[m: cannot open `^[[1m^[[38;5;6m/etc/..^[[39;49m^[[m' (No such file or directory)</pre>
| |
− | | |
− | If it happens to you, do whatever is needed to get 'ls' to behave normally, or add <code>filesystem</code> to the <code>DISABLE_TESTS</code> line in /usr/local/etc/rkhunter.conf.
| |
− | | |
− | The port adds a script to /usr/local/etc/periodic/security. You can enable it by adding to /etc/periodic.conf:
| |
− | | |
− | <pre>daily_rkhunter_update_enable="YES"
| |
− | daily_rkhunter_update_flags="--update --nocolors"
| |
− | daily_rkhunter_check_enable="YES"
| |
− | daily_rkhunter_check_flags="--cronjob --rwo"
| |
− | </pre>
| |
− | | |
− | Alternatively, you can just add this to root's crontab:
| |
− | <pre># run Rootkit Hunter every day at 1:06am
| |
− | 06 01 * * * /usr/local/bin/rkhunter --cronjob --update --rwo
| |
− | </pre>
| |
− | | |
− | ==Upgrading specific ports==
| |
− | Certain installed ports (3rd-party software packages) require extra attention when you want to update them with portmaster. Because of this, you can't just update all of your third-party software in one fell swoop; it's best to handle some of them separately. Here are some notes for the more difficult ones I ran across.
| |
− | | |
− | ===Upgrade Perl and Perl modules===
| |
− | Instructions for major and minor version updates are separate entries in /usr/ports/UPDATING. One thing they didn't make at all clear is that (prior to 2013-06-12), <code>perl-after-upgrade</code> is supposed to be run <em>after</em> updating modules; it won't find anything to do otherwise. So, to go from 5.12 to 5.16, I did this:
| |
− | # <code>portmaster -o lang/perl5.16 lang/perl5.12</code>
| |
− | # <code>portmaster p5-</code>
| |
− | # <code>perl-after-upgrade -f</code>
| |
− | # Inspect the old version's folders under <code>/usr/local/lib/perl5</code> and <code>/usr/local/lib/perl5/site_perl</code>. Anything left behind, aside from empty folders, probably means some modules need to be manually reinstalled.
| |
− | | |
− | When there's a perl patchlevel update (e.g. 5.16.2 to 5.16.3), UPDATING might tell you to upgrade ''everything'' Perl-related via <code>portmaster -r perl</code>. I'm not a big fan of this. Somehow, pretty much ''everything'' on the system is tied to Perl, including Apache, MediaWiki, you name it. I don't understand why.
| |
− | | |
− | It is possible to upgrade just Perl itself, and the modules:
| |
− | # <code>portmaster perl</code>
| |
− | # <code>portmaster p5-</code>
| |
− | # <s><code>perl-after-upgrade -f</code></s>
| |
− | | |
− | perl-after-upgrade doesn't exist anymore. Starting with Perl 5.12.5 / 5.14.3 / 5.16.3, they dropped the patchlevel from the folder names in <code>/usr/local/lib/perl5</code> and <code>/usr/local/lib/perl5/site_perl</code>, and the installer handled it automatically.
| |
− | | |
− | ===Update SpamAssassin and related===
| |
− | The SpamAssassin port is now mail/spamassassin, not mail/p5-Mail-SpamAssassin. See UPDATING.
| |
− | | |
− | For the options I've chosen, this will update various Perl modules, gettext, libiconv, curl, libssh2, ca_root_nss, gnupg1.
| |
− | *<s><code>portmaster --packages mail/p5-Mail-SpamAssassin</code></s>
| |
− | *<code>portmaster --packages mail/spamassassin</code>
| |
− | | |
− | The port is rather clumsy in that it deletes <code>/usr/local/etc/mail/spamassassin/sa-update-keys</code>, so after the update, I have to re-import the GPG key for the "sought" ruleset.
| |
− | | |
− | *<code>fetch <nowiki>http://yerp.org/rules/GPG.KEY</nowiki> && sa-update --import GPG.KEY && rm GPG.KEY</code>
| |
− | | |
− | I [http://lists.freebsd.org/pipermail/freebsd-ports/2013-July/084832.html asked about this] on the mailing list, and cc'd the port maintainer, but no word yet.
| |
− | | |
− | If everything has installed correctly, restart sa-spamd when it's done. It probably stopped running during the install.
| |
− | | |
− | As of 3.4.0, if your system doesn't support IPv6, spamc will complain that it can't connect to spamd on ::1. To work around this, you need to add the new <code>-4</code> flag (to force/prefer IPv4) in two places:
| |
− | * <code>/usr/local/etc/mail/spamassassin/spamc.conf</code>
| |
− | * <code>spamd_flags</code> in <code>/etc/rc.conf</code>
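For example (flag placement only; keep whatever other options you already pass):

```
# /usr/local/etc/mail/spamassassin/spamc.conf -- one option per line
-4

# /etc/rc.conf -- prepend -4 to your existing spamd flags, if any
spamd_flags="-4"
```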
| |
− | | |
− | ===Update MySQL===
| |
− | Oracle is now calling it MySQL Community Server.
| |
− | | |
− | Don't update more than one minor version at a time (e.g., the docs say go from 5.5 to 5.6 before going to 5.7).
| |
− | | |
− | The actual databases shouldn't be affected by a minor version bump of MySQL. But of course, you should still consider making a fresh backup first:
| |
− | * <code>mysqldump -E -u<var>XXXXX</var> -p<var>YYYYY</var> --all-databases | bzip2 -c -q > /tmp/mysql-backup.sql.bz2</code>
| |
− | | |
− | Here's what I did when going from 5.5 to 5.6. I'm not sure it was really necessary to stop the 5.5 server and delete the 5.5 packages, but it seemed like a good idea in case there would be conflicts.
| |
− | | |
− | * <code>service mysql-server stop</code>
| |
− | * <code>pkg delete -f mysql\*</code> (without -f, it will also try to remove ports that depend on MySQL, like mediawiki)
| |
− | * <code>portmaster -d databases/mysql-server56</code> (the client's dependencies now include Python and libxml, so it takes a while)
| |
− | * <code>service mysql-server start</code>
| |
− | * <code>mysql_upgrade -uXXXXX -pYYYYY</code>
| |
− | * <code>service mysql-server restart</code>
| |
− | | |
− | You should make sure MediaWiki and any other MySQL-dependent apps still work after doing this.
| |
− | | |
− | ====MySQL backup script====
| |
− | This simple script I wrote keeps a week's worth of daily backups of the database. I run it every day via <code>cron</code>.
| |
− | | |
− | <var>MYSQLUSER</var> and <var>MYSQLPASSWD</var> must be set to real values, not XXXXX and YYYYY; and <var>DUMPDIR</var> and <var>ARCHIVEDIR</var> must point to writable directories.
| |
− | | |
− | If there's a more secure way of handling this, let me know!
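One option, which is standard MySQL client behavior (double-check against your version's docs): keep the credentials in a mode-600 option file and point the tools at it, instead of embedding the password in the script. The filename here is just an example:

```
# /root/.mysql-backup.cnf  (chmod 600)
[client]
user=root
password=put_your_password_here
```

Then invoke the tools with <code>--defaults-extra-file=/root/.mysql-backup.cnf</code> as the ''first'' option (e.g. <code>mysqldump --defaults-extra-file=/root/.mysql-backup.cnf -E --all-databases</code>) and drop the hard-coded user/password variables.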
| |
− | | |
− | <pre>#!/bin/sh
| |
− | | |
− | DUMPDIR=/usr/backup/mysql/daily
| |
− | ARCHIVEDIR=/usr/backup/mysql/weekly
| |
− | MYSQLUSER=root
| |
− | MYSQLPASSWD="put_your_password_here"
| |
− | # Monday=1, Sunday=7
| |
− | ARCHIVEDAY=7
| |
− | | |
− | DATE=`/bin/date "+%Y%m%d"`
| |
− | BZIP=/usr/bin/bzip2
| |
− | DUMPER=/usr/local/bin/mysqldump
| |
− | DAYOFWEEK=`/bin/date "+%u"`
| |
− | CHECKER=/usr/local/bin/mysqlcheck
| |
− | | |
− | # Create an empty file named '.offline' in the document root folder of each
| |
− | # website that needs to not be accessing the database during the backup.
| |
− | # This assumes the web server config or index scripts in those folders will
| |
− | # temporarily deny access as appropriate.
| |
− | touch /usr/local/www/mediawiki/.offline
| |
− | touch /usr/local/www/tt-rss/.offline
| |
− | | |
− | set +C  # allow redirection to overwrite existing files ('set clobber' is csh syntax)
| |
− | if [ -d ${DUMPDIR} -a -w ${DUMPDIR} -a -x ${DUMPER} -a -x ${BZIP} ] ; then
| |
− | OUTFILE=${DUMPDIR}/mysql-backup-${DATE}.sql.bz2
| |
− | echo "Backing up MySQL databases to ${OUTFILE}..."
| |
− | # -E added 2013-04-17 to get rid of warning about events table not being dumped
| |
− | ${DUMPER} -E -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --add-drop-database | ${BZIP} -c -q > ${OUTFILE}
| |
− | else
| |
− | echo "There was a problem with ${DUMPDIR} or ${DUMPER} or ${BZIP}; check existence and permissions."
| |
− | exit 1
| |
− | fi
| |
− | | |
− | if [ -d ${ARCHIVEDIR} ] ; then
| |
− | if [ ${DAYOFWEEK} -eq ${ARCHIVEDAY} ] ; then
| |
− | echo "It's archive day. Archiving ${OUTFILE}..."
| |
− | /bin/cp -p ${OUTFILE} ${ARCHIVEDIR}
| |
− | echo "Deleting daily backups older than 1 week..."
| |
− | /usr/bin/find ${DUMPDIR} -mtime +7 -exec rm -v {} \;
| |
− | fi
| |
− | else
| |
− | echo "Cannot archive: ${ARCHIVEDIR} does not exist."
| |
− | exit 1
| |
− | fi
| |
− | | |
− | if [ -x ${CHECKER} ] ; then
| |
− | echo "Checking & repairing tables..."
| |
− | ${CHECKER} -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --medium-check --auto-repair --silent
| |
− | echo "Optimizing tables..."
| |
− | ${CHECKER} -u${MYSQLUSER} -p${MYSQLPASSWD} --all-databases --optimize --silent
| |
− | echo "Done."
| |
− | fi
| |
− | | |
− | # Remove the '.offline' files
| |
− | rm -f /usr/local/www/mediawiki/.offline
| |
− | rm -f /usr/local/www/tt-rss/.offline
| |
− | </pre>
| |
− | | |
− | One downside of this script is that even on my small database, it takes a little while to run, like 15 minutes or so. While it's running, the database tables are locked (read-only). You don't want your database-backed websites to be doing stuff until the dump is finished. So I temporarily take those sites offline by doing a <code>touch .offline</code> to create an empty file named ".offline" in each of the sites' root folders, and then when the backup is done, there's a <code>rm .offline</code> for each one. In those site folders is a "site temporarily offline for backups" HTML page and a .htaccess with the following:
| |
− | <pre>ErrorDocument 503 /.site_offline.html
| |
− | RewriteEngine On
| |
− | RewriteCond %{DOCUMENT_ROOT}/\.offline -f
| |
− | RewriteCond %{REQUEST_URI} !/\.site_offline\.html
| |
− | RewriteRule .* - [R=503,L]
| |
− | </pre>
| |
− | Really there's no reason to write the temporary .offline file in the server root; you could put it in /tmp or wherever, and make the first RewriteCond look for it there. You could also hard-code the path in that RewriteCond directive; %{DOCUMENT_ROOT} may not point where you want if you're using Alias directives.
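For instance, with the flag file moved to <code>/tmp</code> and the path hard-coded (paths here are just examples):

```
ErrorDocument 503 /.site_offline.html
RewriteEngine On
RewriteCond /tmp/.offline -f
RewriteCond %{REQUEST_URI} !/\.site_offline\.html
RewriteRule .* - [R=503,L]
```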
| |
− | | |
− | ===Upgrade Apache from 2.2 to 2.4===
| |
− | | |
− | In mid-2014, Apache 2.4 became the default version in ports, and the db4 ports were deprecated. The only thing I had that was using db4 was apr (base libs needed by Apache), and it wasn't really using it, so I went ahead and deleted the installed db4 versions, and added USE_BDB_VER=5 to my /etc/make.conf (apr can't use db6 yet).
| |
− | | |
− | Then I upgraded Apache to 2.4. It does require some Apache downtime and uninstalling 2.2 (!) because the 2.4 port will abort installation when it sees that some 2.2 files are in the way.
| |
− | | |
− | # remove any forcing of apache22 from /etc/make.conf
| |
− | # build apr + apache24 from ports
| |
− | # stop and delete apache22
| |
− | # install apache24
| |
− | # edit .conf files in /usr/local/etc/apache24 (see notes below)
| |
− | # upgrade lang/php5
| |
− | # install www/mod_php5 with same options as lang/php5 (yes, they split the Apache module into a separate port again!)
| |
− | # 'service apache24 start' and cross your fingers
| |
− | # in /etc/rc.conf, s/apache22_enable/apache24_enable/
| |
− | | |
− | Config file editing...
| |
− | | |
− | Every time you edit, use 'apachectl configtest' to check for problems. Some things to watch for:
| |
− | | |
− | * Many modules are not enabled by default, but you probably want to enable a bunch of them, like these: include_module, deflate_module, actions_module, rewrite_module, ssl_module and socache_shmcb_module, cgi_module, userdir_module, php5_module, any proxy modules you need.
| |
− | * For the most part, you can copy-paste everything from the apache22 files, but don't include any allow/deny directives. Use the new format as explained at https://httpd.apache.org/docs/trunk/upgrading.html
| |
− | * Remove "NameVirtualHost" lines; they do nothing (since 2.3.11) and are going away.
| |
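The allow/deny point is the one that trips up most migrated configs. A sketch of the common mappings, with the old 2.2 forms shown in comments (see the upgrading guide for the full list):

```apache
# 2.2: Order allow,deny + Allow from all
Require all granted

# 2.2: Order deny,allow + Deny from all
Require all denied

# 2.2: Allow from 10.0.0.0/8  (example network)
Require ip 10.0.0.0/8
```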
− | | |
− | ===Update MediaWiki===
| |
− | General info: [http://www.mediawiki.org/wiki/Manual:Upgrading MediaWiki Manual: Upgrading]
| |
− | | |
− | This is updating the Mediawiki code (PHP, etc.), not the database.
| |
− | | |
− | You probably want to make a backup first. I already have daily MySQL backups, so I just do this:
| |
− | * <code>cp -pR /usr/local/www/mediawiki /tmp/mediawiki_backup</code>
| |
− | The new installation actually shouldn't clobber your old LocalSettings or anything else; the backup is just in case. However, any extensions probably need to be reinstalled because they're often tied to a specific version of MediaWiki.
| |
− | | |
− | This updates php (+related), imagemagick (+related), freetype (+related)
| |
− | * <code>portmaster -P www/mediawiki</code>
| |
− | | |
− | Assuming the above went well:
| |
− | * make sure there's nothing special in /usr/local/www/mediawiki/UPGRADE
| |
− | * <code>cd /usr/local/www/mediawiki/maintenance/</code>
| |
− | * <code>php update.php</code>
| |
− | | |
− | Manually install appropriate versions of all of the extensions mentioned in LocalSettings.php. Assuming there are no changes required in LocalSettings.php, this just involves unzipping them into the Extensions directory. The [http://www.mediawiki.org/wiki/Special:ExtensionDistributor site where you get the extensions] has installation instructions.
| |
− | | |
− | ====Blank pages after upgrading PCRE====
| |
− | In February 2014, after upgrading PCRE to 8.34 or higher, Mediawiki versions prior to 1.22.1 will serve up articles with empty content. This is due to a change in PCRE 8.34 that necessitates a patch to Mediawiki and a cache purge.
| |
− | | |
− | Symptoms:
| |
− | * empty content when viewing pages, but edit boxes have the content
| |
− | * HTTP error log shows these messages:<br><code>PHP Warning: preg_match_all(): Compilation failed: group name must start with a non-digit at offset 4 in /usr/local/www/mediawiki/includes/MagicWord.php on line 876<br>PHP Warning: Invalid argument supplied for foreach() in /usr/local/www/mediawiki/includes/MagicWord.php on line 877</code>
| |
− | | |
− | For reference:
| |
− | * Here's the Mediawiki [https://bugzilla.wikimedia.org/show_bug.cgi?id=58640 bug report]
| |
− | * Here's the [https://git.wikimedia.org/patch/mediawiki%2Fcore.git/b9f291e8cd5bb1450f7b1031aa17cf7775aa7e96 patch (sorta)] - I had to just copy-paste the <code>$it</code> and <code>$group</code> lines into <code>/usr/local/mediawiki/includes/MagicWord.php</code> around line 706 (exact spot varies), replacing the old <code>$group</code> line.
| |
− | | |
− | The fix takes effect immediately, but it doesn't affect cached pages, which will probably be any pages that were visited by anyone during the time the problem was happening. If you know what all these pages are, you can purge their cached copies one by one if you visit each one while logged in and load the page with <code>?action=purge</code> appended to the URL. Obviously, this is not convenient if most of your wiki is affected.
| |
− | | |
− | Instead, I did a mass purge by using the [https://www.mediawiki.org/wiki/Extension:PurgeCache PurgeCache extension] to do it. This required creating the <code>/usr/local/mediawiki/extensions/PurgeCache</code> folder and installing [https://svn.wikimedia.org/svnroot/mediawiki/trunk/extensions/PurgeCache/ 4 files] into it. Then I had to go to my user rights page at <code>Special:UserRights/''myusername''</code> and add myself to the developer group (which is deprecated, incidentally; another alternative would be to change the extension's code to require sysop group instead). Finally, I visited <code>Special:PurgeCache</code> and clicked the button to finish the cache purge.
| |
− | | |
− | ===Update tt-rss===
| |
− | | |
− | ====Via web interface====
| |
− | Updating tt-rss can be done from within the web interface, when logged in as Admin. Of course this will mean the port is out of date, but I wanted to try it to see if it works. It does, but in the future I think I'll just use the port to update it.
| |
− | | |
− | First, make a backup:
| |
− | * <code>cp -pR /usr/local/www/tt-rss /usr/local/www/tt-rss.`date -j "+%Y%m%d"`</code>
| |
− | Now give tt-rss write permission:
| |
− | * <code>chgrp www /usr/local/www</code>
| |
− | * <code>chmod g+w /usr/local/www /usr/local/www/tt-rss</code>
| |
− | It will make its own backup. The update will be a fresh installation in the tt-rss directory. When the update is done, copy your themes and any other customized files over from the backup. I'd undo the permission change as well:
| |
− | * <code>chmod g-w /usr/local/www /usr/local/www/tt-rss*</code>
| |
− | This might be a good time to check to see if your themes also need to be updated.
| |
− | | |
− | Follow the instructions below to merge config.php changes and update the database.
| |
− | | |
− | ====Via ports====
| |
− | You can use portmaster on it like normal. However, it will probably cause some PHP and its modules to update, and it will overwrite the old tt-rss installation. It does leave your config.php alone, but it's up to you to merge in any changes from config.php-dist.
| |
− | | |
− | To do an interactive merge:
| |
− | * <code>mv config.php config.php.old</code>
| |
− | * <code>sdiff -d -w 100 -o config.php config.php-dist config.php.old</code>
| |
− | | |
− | Now edit config.php, and set <var>SINGLE_USER_MODE</var> to <code>true</code>. Visit the site and see if you're prompted to do a database upgrade. If so, click through.
| |
− | | |
− | If everything is working, restart the feed update daemon:
| |
− | * <code>/usr/local/etc/rc.d/ttrssd restart</code>
| |
− | | |
− | Edit config.php to set <var>SINGLE_USER_MODE</var> back to <code>false</code>, and test again.
| |
− | | |
− | ====Fresh install via ports====
| |
− | | |
− | My PHP upgrade (see below) obliterated my old tt-rss installation, but thankfully left the old config file and themes behind. Here's what I did:
| |
− | | |
− | * portmaster www/tt-rss - installs php56-pcntl, php56-curl, php56-xmlrpc, php56-posix - Now you have a not-quite-up-to-date snapshot...good enough for now, but you have to use git to stay current. :/
| |
− | * copy config.php from old installation BUT SET SINGLE_USER_MODE or you'll get an access level error on login
| |
− | * install latest clean-greader theme
| |
− | * visit the installation in a web browser - "FEED_CRYPT_KEY requires mcrypt functions which are not found."
| |
− | * portmaster security/php56-mcrypt
| |
− | * service apache24 restart
| |
− | * visit the installation in a web browser - follow prompt to perform updates
| |
− | * unset SINGLE_USER_MODE
| |
− | * visit again and make sure it works
| |
− | * service ttrssd restart
| |
− | | |
− | ===Update PHP===
| |
− | This was how I did the PHP upgrade from 5.4 to 5.6 (roughly):
| |
− | * pkg delete '*php5*' - this deletes mediawiki and tt-rss too
| |
− | * cd /usr/ports/www/mediawiki && make config - I disabled ImageMagick
| |
− | * for php56 config: xcache is the only speedup option that works with 5.6 (no pecl or whatever the other one is). I enabled it
| |
− | * portmaster www/mediawiki
| |
− | * follow instructions to copy xcache.ini to where it goes. I set an admin username and pw hash in it.
| |
− | * portmaster www/mod_php56
| |
− | * portmaster www/php56-hash - needed for mediawiki logins to work, but wasn't installed for some reason
| |
− | * cd /usr/local/www/mediawiki/maintenance
| |
− | * php update.php - didn't work at first because it wasn't using AdminSettings.php. Solution: add <code>require_once("AdminSettings.php");</code> to LocalSettings.php
| |
− | * service apache24 restart
| |
− | * see above for tt-rss
| |
− | | |
− | ==Upgrade to pkgng==
| |
− | In November 2013, I decided to upgrade from the stock pkg_install tools to the new pkgng, aka <code>pkg</code>. I followed the instructions in [http://lists.freebsd.org/pipermail/freebsd-ports-announce/2013-October/000068.html the announcement] and all went well, except I had to write to the author of that announcement to learn that he meant to write <code>enabled: yes</code> instead of <code>enabled: "yes"</code>. If you include the quotes, the <code>pkg</code> command will warn about the value not being a boolean.
| |
− | | |
− | pkgng replaces the pkg_install tools, including <code>pkg_create</code>, <code>pkg_add</code>, and <code>pkg_info</code>. It doesn't remove them from your system; you just have to remember not to use them. Putting <code>WITH_PKGNG=yes</code> in your <code>/etc/make.conf</code> tells portmaster and other tools to use the new tool, <code>pkg</code>, which has a number of subcommands, e.g. <code>pkg info</code>.
| |
− | | |
− | ===Incompatibility with portmaster===
| |
− | I was hoping to also use packages when I upgrade my ports, but as of mid-December 2013, running <code>portmaster</code> with the <code>-P</code> or <code>--packages</code> option results in a warning: <code>Package installation support cannot be used with pkgng yet, it will be disabled</code>.
| |
− | | |
− | ==HTTPS support==
| |
− | Apache comes with HTTPS support (SSL) disabled by default. It's not too hard to enable, but configuration does require some effort, especially for a public server with name-based virtual hosts (i.e., serving different websites with different configurations as directed by the HTTP "Host:" header in incoming requests).
| |
− | | |
− | ===Upgrade OpenSSL===
| |
− | FreeBSD comes with libssl (OpenSSL) 0.9.x, which only supports TLS 1.0. You can get decent protection with that, but it's better to use OpenSSL 1.x and get TLS 1.1 and 1.2 support, which makes it a lot easier to have "perfect" forward secrecy. All you have to do is install the security/openssl port, and then anything you compile that needs OpenSSL will use the updated libs.
| |
− | | |
− | It's safe to build things like Apache, curl, and Spamassassin using the stock libssl and then rebuild them later after you upgrade libssl.
| |
− | | |
− | ===Get a certificate===
| |
− | To support HTTPS, your server needs an SSL certificate (cert). For a public server you don't want to use a self-signed cert; nobody will install it into their browser/OS's certificate store, and even if they do, their browser may still warn about how crappy the security is—the cipher may be strong, but no one can vouch for the cert's authenticity and trust. It's kind of like how in journalism, a news outlet is unreliable if it doesn't publish corrections: a self-signed cert can't be revoked (for example, if the server's private key is disclosed), but a "real" cert signed by a Certificate Authority (CA) can be.
| |
− | | |
− | To get a certificate, generally speaking, you have to:
| |
− | # generate a private key (basically a random number + optional passphrase to encrypt it)
| |
− | # use the private key to generate a Certificate Signing Request (CSR)
| |
− | # submit the CSR to a Certificate Authority (CA).
| |
− | | |
− | Usually you have to pay the CA some money, and they have to do some kind of verification that you are a valid point of contact for the domain. The simplest, "basic" or "Class 1" type of verification is they send a code to (e.g.) hostmaster@example.org (example.org actually being whatever domain you're seeking a cert for), and if you paste the code into a form on their website, they know you saw the email and they'll issue you a cert.
| |
− | | |
− | Of course if you are trying to do this on the cheap, you want a free cert, and doing a web search for ''free SSL certificate'' will get you lots of results, but mostly they will be for services offering free S/MIME certificates. These are specialized certificates for signing or encrypting email messages before they are sent. S/MIME certs can't be used for web servers, or for encrypting an SMTP server's traffic.
| |
− | | |
− | Some CAs allow you to have ''them'' generate the private key and CSR for you. I don't recommend doing that, because it's better to know that ''only you'' have your private key and that the key and the CSR were generated on computers ''you control''. So just generate your own key and CSR, and copy-paste that into the CA's web form.
| |
− | | |
− | ===Think about the security of your private key===
| |
− | If anyone ever gets a copy of your private key ''and'' they know (or can easily guess) the passphrase you used to encrypt it, then your key and all certs associated with it should be considered compromised. So, think about where you are storing the private key. How secure is that computer it's on? Is the passphrase written down somewhere? Is it easy to guess if someone has access to your other files? Hopefully it's not stored in plain text on the same box!
| |
− | | |
− | If your key is ever compromised, you have to revoke the certificates that were signed with it. Your CA should have a process for doing that and they shouldn't charge extra for it.
| |
− | | |
− | ===Generate a private key===
| |
− | * <code>openssl genrsa -out ssl.key 2048</code>
| |
− | Some considerations:
| |
− | * Use a passphrase? No. This would make it more secure, but then you'd have to enter it every time Apache is started or sent a SIGHUP.
| |
− | * How many bits? Some tutorials say 1024, but 2048 is pretty standard now, so use 2048. More bits means more CPU cycles needed for encryption, so I'm hesitant to use 4096 (my server is running on old hardware), lest it slow things down too much. However, I've read that encryption overhead really isn't that high, even on busy servers, so maybe it's no big deal to use 4096.
| |
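If you do end up with a passphrase-protected key (or inherit one) and want Apache to start unattended, you can strip the passphrase. A sketch with a throwaway passphrase and filenames; keep the resulting file readable by root only:

```shell
# Make an encrypted demo key (the passphrase "demo123" is just for illustration)
openssl genrsa -aes256 -passout pass:demo123 -out enc.key 2048

# Write an unencrypted copy that Apache can load without prompting
openssl rsa -in enc.key -passin pass:demo123 -out plain.key

# The unencrypted key must not be world-readable
chmod 600 plain.key
```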
− | | |
− | ===Generate a CSR===
| |
− | * <s><code>openssl req -new -key server.key -out server.csr -sha1</code></s>
| |
− | | |
− | SHA-1 is crackable now, so you need to use SHA-256 instead (pass <code>-sha256</code> rather than <code>-sha1</code>); see https://community.qualys.com/blogs/securitylabs/2014/09/09/sha1-deprecation-what-you-need-to-know
| |
− | | |
− | You'll be prompted to enter country, state/province, locality, organization name, organizational unit name—these can be blank or filled in as you wish (although I found that I had to enter country/state/locality). Then you enter the Common Name (CN), which should be the "main" domain name the cert is for. If it's a wildcard cert, the CN would be something like "*.example.com". Otherwise it needs to match the main domain name that people will be using to access the server. Some registrars might want you to use a FQDN ("something.example.com").
| |
− | | |
− | You'll also be prompted to enter an email address that will be in the cert; I suggest something that works but isn't too revealing, like root or hostmaster at your domain.
| |
− | | |
− | If prompted for a challenge password, this is a password that you create and give to the CA. They can then use it in order to verify you in future interactions with them. It's a way to protect against someone impersonating you when they talk to the issuer.
| |
− | | |
− | Optional company name is probably for if your company is requesting the cert on behalf of someone else. I just leave it blank.
| |
− | | |
− | Now you have a text file, <code>server.csr</code>, the contents of which you'll copy-paste or otherwise upload to the CA.
| |
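The prompts can be skipped entirely with <code>-subj</code>. Here is the whole key-plus-CSR sequence as a sketch; the subject values and filenames are placeholders for your own details:

```shell
# 2048-bit key, no passphrase (see the considerations above)
openssl genrsa -out server.key 2048

# CSR signed with SHA-256; -subj supplies the fields non-interactively
openssl req -new -key server.key -out server.csr -sha256 \
    -subj "/C=US/ST=Colorado/L=Denver/O=Example/CN=example.com"

# Sanity-check what the CA will see
openssl req -in server.csr -noout -subject
```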
− | | |
− | ===Get your cert from the CA===
| |
− | Turn off any ad or script blockers when accessing the CA's website.
| |
− | | |
− | If you're new there, you'll probably have to verify an email address (doesn't matter what it is, as long as you can get the code they send you) and paste a validation code into a form. They may also try to make your browser accept an SSL cert for authenticating you. Think of it as an extra-special cookie.
| |
− | | |
− | Once you're in, follow whatever procedures they have laid out. Probably they will want to validate your domain. This requires them sending a validation code to an email address at the domain in question (e.g. hostmaster@yourdomain.com), and then you tell them what code you received. After the domain is validated, you give them your CSR text.
| |
− | | |
− | I found that when working with one particular CA to get a non-wildcard cert, if I generated a CSR for a bare domain (example.org), the CA required that I enter a FQDN (something.example.org). The resulting cert contained something.example.org as the CN and example.org as a Subject Alternative Name (meaning, "also good for this domain"). It worked fine.
| |
− | | |
− | If everything goes well, the CA will give you your requested cert (e.g. <code>ssl.crt</code>), along with root and intermediate certs (maybe in one file). You will need to tell Apache where all of these files are. The CA probably has instructions on their site.
| |
− | | |
− | ===Configure Apache HTTPD===
| |
− | Put the cert files wherever you want, just make sure that the folder and files are readable only by root.
| |
− | | |
− | Edit httpd.conf and uncomment the line that says something like
| |
− | <pre>Include etc/apache24/extra/httpd-ssl.conf</pre>
| |
− | | |
− | Edit <code>extra/httpd-ssl.conf</code> and comment out the <code><VirtualHost: _default_:443></code>...<code></VirtualHost></code> and its contents (aside from what's already commented-out). Here's the general idea of what you should add instead:
| |
− | | |
− | Enable name-based virtual host configs (Apache 2.2 only; as noted earlier, this directive does nothing since 2.3.11 and should be omitted on 2.4):
| |
− | <pre>NameVirtualHost *:443</pre>
| |
− | | |
− | Set up an alias for a desired [https://httpd.apache.org/docs/2.4/mod/mod_log_config.html#formats access-log format]. I want to use the standard "combined" format, with a couple of SSL-specific details appended:
| |
− | <pre>LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x" combined_plus_ssl</pre>
| |
− | | |
− | For each of the domains named in the certificate, you need a virtual host entry. You are mainly duplicating your httpd-vhosts.conf entries, but for port 443, with SSL stuff added, and (probably) different log file locations and formats.
| |
− | | |
− | In HTTPS, the client first opens a TCP connection to port 443 at the server's IP address and negotiates encryption (the TLS handshake). Once this is done, the actual HTTP request is sent over the encrypted channel, then decrypted and handled.
| |
− | | |
− | When using a non-[http://en.wikipedia.org/wiki/Server_Name_Indication SNI]-capable browser, the initial, unencrypted connection does not have a hostname/domain (identifying the desired website) associated with it, so the first <code><VirtualHost></code> entry that matches the IP address and port 443 will be handling it, and the certificate defined in that entry must be the same as the one in the entry that will be handling the actual HTTP request. The HTTP-handling entry could be the same entry as the initial connection-handling entry, or it could be separate.
| |
− | | |
− | When the connection comes from an SNI-capable browser, then it will probably have a hostname/domain, so an SNI-capable server (like Apache 2.2.12 and up, built with OpenSSL 0.9.8j and up, which is standard since mid-2009) will simply use the <code><VirtualHost></code> entry with the corresponding ServerName for both the initial connection and the actual HTTP request.
| |
− | | |
− | Once the encrypted connection is established, the rest of the communication is ordinary HTTP requests that arrive encrypted. These are sent to port 443 at the same IP address, and are decrypted and handled like normal (but with these configs, not the ones for port 80). Each request should contain a <code>Host:</code> header to specify the hostname/domain. So the first <code><VirtualHost></code> entry does double-duty, handling the HTTP service for one of these domains:
| |
− | | |
− | <pre># This one will be for any encrypted requests on *:443 with
| |
− | # "Host: example.com:443" headers.
| |
− | #
| |
− | # By virtue of being first, this entry also applies to the initial connection on
| |
− | # *:443 (for non-SNI clients), and encrypted requests on *:443 with a missing or
| |
− | # unrecognized Host header.
| |
− | #
| |
− | <VirtualHost *:443>
| |
− | ServerName example.com:443
| |
− | ServerAdmin root@example.com
| |
− | SSLEngine on
| |
− | SSLProtocol all -SSLv2 -SSLv3
| |
− | SSLCertificateKeyFile "/path/to/server.key"
| |
− | SSLCertificateFile "/path/to/ssl.crt"
| |
− | SSLCACertificateFile "/path/to/root.crt"
| |
− | Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
| |
− | DocumentRoot "/path/to/whatever"
| |
− | CustomLog "/path/to/whatever" combined_plus_ssl
| |
− | ErrorLog "/path/to/whatever"
| |
− | LogLevel notice
| |
− | </VirtualHost></pre>
| |
− | | |
− | SSLCACertificateFile is for the CA root cert. Some CAs issue intermediate certs in a file separate from the root cert. In that case, you'd have to refer to that intermediate cert file as SSLCertificateChainFile in your Apache config. But if the root and intermediate cert are in a single file, you just use SSLCACertificateFile by itself.
| |
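In other words, the directive combination for a CA that ships the intermediate separately looks something like this (paths are placeholders; on Apache 2.4.8+ you can instead append the intermediates to the <code>SSLCertificateFile</code> file and drop the chain directive):

```apache
SSLCertificateKeyFile   "/path/to/server.key"
SSLCertificateFile      "/path/to/ssl.crt"
# Intermediate cert(s) delivered in their own file:
SSLCertificateChainFile "/path/to/intermediate.crt"
# Root cert (or root + intermediate combined in one file):
SSLCACertificateFile    "/path/to/root.crt"
```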
− | | |
− | You're going to want LogLevel to be ''notice'' or higher, because there's a lot of noise in the SSL ''info''-level messages.
| |
− | | |
− | Of course <code>*</code> can be replaced with a specific IP address, if you want.
| |
− | | |
− | The rest of the VirtualHost entries are only for the specific <code>Host:</code> headers. Make sure there's one for each name the cert is good for.
| |
− | | |
− | <pre># This one will be for any encrypted requests on *:443 with
| |
− | # "Host: foo.example.com:443" headers, and for the initial
| |
− | # connection on *:443 by SNI-capable clients wanting foo.example.com.
| |
− | #
| |
− | # Don't forget to mirror any non-SSL, non-log changes here
| |
− | # with the corresponding *:80 entry in httpd-vhosts.conf.
| |
− | #
| |
− | <VirtualHost *:443>
| |
− | ServerName foo.example.com:443
| |
− | ServerAdmin root@example.com
| |
− | SSLEngine on
| |
− | SSLProtocol all -SSLv2 -SSLv3
| |
− | SSLCertificateKeyFile "/path/to/server.key"
| |
− | SSLCertificateFile "/path/to/ssl.crt"
| |
− | SSLCACertificateFile "/path/to/root.crt"
| |
− | Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
| |
− | DocumentRoot "/path/to/whatever"
| |
− | CustomLog "/path/to/whatever" "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
| |
− | ErrorLog "/path/to/whatever"
| |
− | LogLevel notice
| |
− | </VirtualHost></pre>
| |
− | | |
− | Ref (non-SNI): https://wiki.apache.org/httpd/NameBasedSSLVHosts
| |
− | Ref (SNI): https://wiki.apache.org/httpd/NameBasedSSLVHostsWithSNI
| |
− | | |
− | It's a good idea to have entries for any other domains hosted on the same IPs. That is, every HTTP website should have some kind of HTTPS service as well. This has a couple of ramifications:
| |
− | * You will have to keep the <code>:443</code> <code><VirtualHost></code> entries in sync with the <code>:80</code> ones.
| |
− | * When people try to access the HTTPS versions of sites that the certificate isn't valid for, they'll get warnings in their browsers. If they choose to accept the certificate anyway, what do you want to do? In my opinion, the best thing to do is redirect to an HTTPS site that the certificate is good for, or if there's no such option, just redirect to the regular HTTP site. In either case, their initial request should still be handled with SSL:
| |
− | | |
− | <pre># People might try to access our hosted domains via HTTPS (port 443)
| |
− | # even if we don't have certs for those domains. They'll get the default
| |
− | # cert (as per the first VirtualHost entry) and despite the warning
| |
− | # in their browser, the user has the option of accepting it.
| |
− | # We want to redirect them to the appropriate, probably non-SSL location.
| |
− | #
| |
− | <VirtualHost *:443>
| |
− | ServerName non-ssl-host.example.com:443
| |
− | ... the usual SSL stuff goes here ...
| |
− | DocumentRoot "whatever"
| |
− | Redirect / http://non-ssl-host.yourdomain.org/
| |
− | CustomLog "whatever" combined_plus_ssl
| |
− | ErrorLog "whatever"
| |
− | LogLevel notice
| |
− | </VirtualHost></pre>
| |
− | | |
− | ===See if it works===
| |
− | * Visit your web sites with https URLs and see what happens.
| |
− | * Use a third-party SSL checker like [http://www.sslshopper.com/ssl-checker.html SSLShopper's SSL Checker].
| |
− | * If you use Firefox or Chrome, install the [http://www.eff.org/https-everywhere HTTPS Everywhere] extension, [[User:Mjb/HTTPS Everywhere|create a custom ruleset]] for it, then see if you get redirected to the https URL when you try to visit the http URL of your web site.
| |
− | | |
− | Something else to check for is mixed content. Ideally, an HTTPS-served page shouldn't reference any HTTP-served scripts, stylesheets, images, videos, etc.; browsers may warn about it. Replace any <code>http:</code> links in your HTML with relative links (for resources on the same site) or <code>https:</code> links (for resources that are verifiably available via HTTPS). For example, in MediaWiki's LocalSettings.php, I had to change <var>$wgRightsUrl</var> and <var>$wgRightsIcon</var> to use <code>https:</code> URLs. There may still be some external resources which are only available via HTTP, but if they're outside your control, there's nothing you can do about that.
| |
− | | |
− | ===Improvements===
| |
− | ====HSTS====
| |
− | HSTS is a lot like HTTPS Everywhere, but it comes standard in modern browsers. You enable HSTS on the server just by having it send a special header in its HTTPS responses. The header tells HSTS-capable browsers to only use HTTPS when accessing the site in the future. In the main configuration, you need
| |
− | * <code>LoadModule headers_module modules/mod_headers.so</code>
| |
− | On my system, this was already enabled. Then, in the <code><VirtualHost></code> section for each HTTPS site (not regular HTTP!), you need
| |
− | * <code>Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"</code>
| |
− | | |
− | Test it in your browser by disabling HTTPS Everywhere (if installed), then visit the HTTPS website, then try to visit the HTTP version of the site. The browser should change the URL back to use HTTPS automatically.
| |
− | | |
− | ====POODLE attack mitigation====
| |
− | The attack forces a downgrade to SSLv3, which is now too weak to be relied upon. You have to disable SSLv3. IE6 users will be locked out.
| |
− | * <code>SSLProtocol all -SSLv2 -SSLv3</code>
| |
− | | |
− | ====CRIME attack mitigation====
| |
− | This is an easy one. Just ensure TLS compression is not enabled. It normally isn't enabled, but just in case:
| |
− | * <code>SSLCompression off</code>
| |
− | | |
− | ====BEAST attack mitigation====
| |
− | * requires combo of <code>SSLProtocol</code> and <code>SSLCipherSuite</code>
| |
− | * use TLS 1.1 or higher, or (for TLS 1.0) only use RC4 cipher
| |
− | * you can't specify "RC4 for TLS 1.0, but no RC4 for TLS 1.1+" in mod_ssl
| |
− | * TLS 1.1+ can still be downgraded to 1.0 by a MITM!
| |
− | * RC4 has vulnerabilities, too!
| |
− | * Apache 2.2 w/mod_ssl is normally built w/OpenSSL 0.9.x, supporting TLS 1.0 only!
| |
− | | |
− | But wait, read on...
| |
− | | |
− | ====Perfect forward secrecy====
| |
− | Cipher suites using ''ephemeral'' Diffie-Hellman key exchange provide forward secrecy; "perfect" forward secrecy (PFS) is the usual name for this property. (Static DH key exchange does not provide it.)
| |
− | * it ensures session keys can't be cracked if private key is compromised
| |
− | * it requires ''ephemeral'' Diffie-Hellman key exchange ("EDH" or "DHE"), optionally with Elliptic Curve cryptography ("ECDHE" or "EECDH") to reduce overhead
| |
− | * ECDHE requires Apache 2.3.3+! (it's OK to leave it listed in 2.2's config though)
| |
− | * browser support varies
| |
− | | |
− | The basic config of
| |
− | * <code>SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5</code>
| |
− | gives me a pretty nice report with lots of green "Forward Secrecy" results on the [https://www.ssllabs.com/ssltest/analyze.html Qualys SSL Labs analyzer].
| |
− | | |
− | This gets more complicated if you want to mitigate the BEAST attack. There are suggestions [http://stackoverflow.com/questions/17308690/how-do-i-enable-perfect-forward-secrecy-by-default-on-apache][http://blog.ivanristic.com/2013/08/configuring-apache-nginx-and-openssl-for-forward-secrecy.html] for dealing with it through the use of SSLCipherSuite directives that prioritize RC4 if AES isn't available. However, this is not good for Apache 2.2, because you'll probably end up disabling forward secrecy for everyone.
| |
− | | |
− | Reference for <code>SSLCipherSuite</code>: [https://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslciphersuite here (click)]. It may help to know that on the command line, you can do <code>openssl ciphers -v</code> followed by the same parameters you give in the <code>SSLCipherSuite</code> directive, and it will tell you what ciphers match.
| |
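For example, to see what the basic <code>HIGH:MEDIUM:!aNULL:!MD5</code> expression actually expands to on your OpenSSL build (the exact list varies by version):

```shell
# Expand an SSLCipherSuite expression into the concrete cipher list,
# in preference order
openssl ciphers -v 'HIGH:MEDIUM:!aNULL:!MD5' > matched.txt

head -3 matched.txt        # strongest suites first
wc -l < matched.txt        # total number of matching suites
```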
− | | |
− | It's best to beef up your Diffie-Hellman setup by following the instructions at [https://weakdh.org/sysadmin.html https://weakdh.org/sysadmin.html]. In a nutshell:
| |
− | * <code>cd /etc/ssl</code>
| |
− | * <code>openssl dhparam -out dhparams.pem 2048</code>
| |
− | After a nice long wait for that to finish, make Apache use the new params and a new order of cipher suites. In /usr/local/etc/apache24/extra/httpd-ssl.conf:
| |
− | * <code>SSLOpenSSLConfCmd DHParameters "/etc/ssl/dhparams.pem"</code>
| |
− | * <code>SSLHonorCipherOrder on</code>
| |
− | * <code>SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA</code>
| |
− | | |
− | ==SMTP Authentication and STARTTLS support in Sendmail==
| |
− | | |
− | FreeBSD comes with sendmail installed in the base system, with support for STARTTLS (the SMTP command that sets up encryption) disabled. You will get encryption support if you just tell sendmail where to find certificates.
| |
− | | |
− | To also do authentication—i.e. where authorized users log in to your server to have it deliver mail for them—you need to rebuild sendmail with support for the SASL libraries. Every time there is an update to the base system's sendmail, you'll have to do the rebuild in /usr/src, which can be a pain. Some administrators choose to install sendmail from the ports collection to make this easier, but that port is really mainly intended for helping upgrade sendmail installations on older systems.
| |
− | | |
− | ===Set up authentication===
| |
− | | |
− | In order to set up authentication, rebuild sendmail with support for the SASL libraries. Just follow the instructions in the [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/SMTP-Auth.html SMTP Authentication section] of the FreeBSD Handbook.
| |
− | | |
− | Where the handbook refers to editing freebsd.mc or the local .mc, I made sure to use <code>/etc/mail/`hostname`.mc</code>.
| |
− | | |
− | The handbook also suggests increasing the log level from its default of 9, but doesn't say how. You do it by adding this to the .mc file:
| |
− | | |
− | <pre>dnl log level
| |
− | define(`confLOG_LEVEL', `13')dnl</pre>
| |
− | | |
− | <div style="border: 1px solid black; margin-left: 2em; margin-bottom: 1em; background: #cccccc; padding: 1em; float: right; width: 30em;">As [[#Upgrade to a new patch level|mentioned previously]], any time you update the OS with <code>freebsd-update</code>, you will probably overwrite your custom builds of system binaries. So for example, if you have built Sendmail with SASL2, it will be clobbered by freebsd-update, so you will have to rebuild it!</div>
| |
− | At this point, do the <code>make install restart</code> as directed, just to make sure nothing broke. sendmail should start up quietly. Maybe send yourself a test message and make sure you can still receive mail OK. Feel free to tail the mail log and see what it says.
| |
− | | |
− | The outcome here, if I understand correctly, is this:
| |
− | * SMTP clients (email programs) can now ask to interact with my server as a local user (with their login password), in order to use my server as a relay for their outbound mail. (Your ISP may not appreciate this; I know mine insists that people use the ISP's own relays exclusively.)
| |
− | | |
− | Previously, to allow relaying, I had set up each user's home IP address as a valid <code>RELAY</code> in <code>/etc/mail/access</code>. Obviously authentication is better. However...
| |
− | | |
− | I think the handbook's advice, as given, is rather dangerous, because it says to override the default authentication methods, which [http://www.sendmail.org/~ca/email/auth.html the documentation] currently says are GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5. The handbook omits KERBEROS_V4, which is no big deal, but it also adds the LOGIN mechanism, which transmits the username and password in the clear (well, base64-encoded); that's a big deal if the connection isn't yet encrypted.
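To see why unencrypted LOGIN is a problem, remember that base64 is an encoding, not encryption: anyone sniffing the session can reverse it with one command. A quick illustration (the password is a made-up example):

```shell
#!/bin/sh
# What a sniffer sees during an unencrypted AUTH LOGIN is just base64 text,
# and turning it back into the password takes a single openssl call.
encoded=$(printf 'hunter2' | openssl base64)
decoded=$(printf '%s\n' "$encoded" | openssl base64 -d)
echo "on the wire: $encoded  decoded: $decoded"
```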
| |
− | | |
− | Regardless of whether you leave LOGIN (or PLAIN) in there, but ''especially'' if you do, I strongly suggest you also add this to the .mc file:
| |
− | | |
− | <pre>dnl SASL options:
| |
− | dnl f = require forward secrecy
| |
− | dnl p = require TLS before LOGIN or PLAIN auth permitted
| |
− | dnl y = forbid anonymous auth mechanisms
| |
− | define(`confAUTH_OPTIONS',`f,p,y')dnl</pre>
| |
− | | |
− | While you're in there, throw KERBEROS_V4 back in and change the comments to be more informative:
| |
− | | |
− | <pre>dnl authentication will be allowed via these mechanisms:
| |
− | define(`confAUTH_MECHANISMS', `GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl
| |
− | | |
− | dnl relaying will be allowed for users who authenticated via these mechanisms:
| |
− | TRUST_AUTH_MECH(`GSSAPI KERBEROS_V4 DIGEST-MD5 CRAM-MD5 LOGIN')dnl</pre>
| |
− | | |
− | ===Set up encryption===
| |
− | | |
− | Public key encryption via the STARTTLS command won't work until you tell sendmail where the private key and certificates are. So, in the .mc file add the following:
| |
− | | |
− | <pre>dnl certificate and private key paths for STARTTLS support
| |
− | define(`confCACERT_PATH', `/etc/mail/certs')dnl
| |
− | define(`confCACERT', `/etc/mail/certs/CAcert.pem')dnl
| |
− | define(`confSERVER_CERT', `/etc/mail/certs/MYcert.pem')dnl
| |
− | define(`confSERVER_KEY', `/etc/mail/certs/MYkey.pem')dnl
| |
− | define(`confCLIENT_CERT', `/etc/mail/certs/MYcert.pem')dnl
| |
− | define(`confCLIENT_KEY', `/etc/mail/certs/MYkey.pem')dnl</pre>
| |
− | | |
− | Also create the referenced directory and files. They must be readable only by owner, and symlinks are OK:
| |
− | * <code>mkdir -m 700 /etc/mail/certs</code>
| |
− | * <code>cd /etc/mail/certs</code>
| |
− | * <code>ln -s /''actual_path_to_CA_cert''/ssl.crt MYcert.pem</code>
| |
− | * <code>ln -s /''actual_path_to_my_private_key''/server.key MYkey.pem</code>
| |
− | * <code>ln -s /''actual_path_to_CA_root_cert''/root.crt CAcert.pem</code>
| |
− | | |
− | Now <code>make install restart</code> and tail the mail log, watching for errors. Also run the tests at [http://www.checktls.com/ checktls.com] ... for me, everything worked on the first try!
| |
− | | |
− | Outcomes:
| |
− | * SMTP clients (email programs and mail relays) that connect to my server anonymously in order to hand off mail for my users (or for other domains I relay to) ''can now request encryption and communicate securely''.
| |
− | * My SMTP server, when connecting to a remote SMTP server in order to deliver mail from my users, ''can now request encryption and communicate securely''.
| |
− | | |
− | ===Certificate limitations===
| |
− | [http://weldon.whipple.org/sendmail/wwstarttls.html I have read] that '''not all certificates work for STARTTLS'''.
| |
− | | |
− | Apparently you can run <code>openssl x509 -noout -purpose -in ''path_to_your_cert''</code> to see what "purposes" your cert is approved for. Here's the output for my AlphaSSL wildcard cert:
| |
− | | |
− | <pre>Certificate purposes:
| |
− | SSL client : Yes
| |
− | SSL client CA : No
| |
− | SSL server : Yes
| |
− | SSL server CA : No
| |
− | Netscape SSL server : Yes
| |
− | Netscape SSL server CA : No
| |
− | S/MIME signing : No
| |
− | S/MIME signing CA : No
| |
− | S/MIME encryption : No
| |
− | S/MIME encryption CA : No
| |
− | CRL signing : No
| |
− | CRL signing CA : No
| |
− | Any Purpose : Yes
| |
− | Any Purpose CA : Yes
| |
− | OCSP helper : Yes
| |
− | OCSP helper CA : No</pre>
| |
− | | |
− | I suspect "SSL client : Yes" is crucial.
| |
− | | |
− | ===Client certificate verification===
| |
− | What good is encryption if the client is being impersonated by some Man-in-the-Middle (MITM) who is choosing his favorite cipher and sending you his public key? The way to defend against this is to verify the client. But you also have to figure out what to do with unverifiable clients.
| |
− | | |
− | ====Certificates for trusted clients or their CAs are required on the server====
| |
− | Unless you configured the server not to request a certificate from the client, it will ask for one, and it will tell the client "I'm prepared to accept a certificate signed with these CA root certificates..." The certs it will accept are the root certs and self-signed certs that are in the <code>confCACERT</code> file, plus those that you have symlinks for in the <code>confCACERT_PATH</code> directory. The client will then decide whether it wants to offer the server a cert at all.
| |
− | | |
− | The Sendmail Installation and Operation Guide says you can't have the server accepting too many root certs, because the TLS handshake may fail. But it doesn't say how many is too many; it just says only include the CA cert that signed your own certs, plus any others you trust. I take this to mean that I'm not supposed to include the whole Mozilla root cert bundle, i.e. <code>/usr/local/share/certs/ca-root-nss.crt</code>, as installed by the security/ca_root_nss port (which is maybe already on the system, as it is needed by curl, SpamAssassin, gnupg, etc.).
| |
− | | |
− | To verify a client cert signed by a CA, you need a copy of the CA root certificate ''and any intermediate certificates'' to be on the system. As many certs as you want can be concatenated together in the <code>confCACERT</code> file, or they can be in separate files represented by symlinks, named for the cert's hash, in the <code>confCACERT_PATH</code> directory. If intermediate certificates are present, they can be in separate files, too, or they can have the higher-level certs, on up to the root, concatenated to them in one file; e.g. GoDaddy has a <code>gd_bundle.crt</code> file available for this purpose, with the contents of <code>gd_intermediate.crt</code> followed by the contents of <code>gd-class2-root.crt</code>; the hash will be for the first cert in the bundle (i.e., the lowest-level intermediate cert).
| |
− | | |
− | To verify a self-signed client cert, I believe you need a copy of the self-signed cert to be on the system; it is treated like a CA root cert. It can live in the file with the root certs or it can have a symlink in the <code>confCACERT_PATH</code> directory.
| |
− | | |
− | Here is how to generate the appropriate symlink (but replace both instances of ''cert.crt'' with the path to the appropriate file):
| |
− | * <code>ln -s ''cert.crt'' `openssl x509 -noout -hash < ''cert.crt''`.0</code>
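If you have several CA or self-signed certs to install, that one-liner can be generalized. A sketch (assumes it is run inside the <code>confCACERT_PATH</code> directory, and only fills the common <code>.0</code> hash slot):

```shell
#!/bin/sh
# Create the hash-named symlink OpenSSL's lookup expects
# for every PEM certificate in the current directory.
for cert in *.pem *.crt; do
    [ -f "$cert" ] || continue
    hash=$(openssl x509 -noout -hash -in "$cert" 2>/dev/null) || continue
    ln -sf "$cert" "$hash.0"
done
```

If two certs happen to hash to the same value, OpenSSL expects <code>.1</code>, <code>.2</code>, etc.; this sketch doesn't handle that case.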
| |
− | | |
− | ====Verification results====
| |
− | When your server receives email via an encrypted connection, you will see something like this in the <code>Received:</code> headers:
| |
− | * <code>(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)</code>
| |
− | | |
− | Here are the possible client certificate verification codes:
| |
− | * <code>verify=OK</code> means that the verification succeeded.
| |
− | * <code>verify=NOT</code> means that the server didn't ask for a cert, probably because it was configured not to.
| |
− | * <code>verify=NO</code> means that the server asked for a cert, but the client didn't provide one, or it didn't provide the intermediate and root certs along with the client cert. Maybe the client isn't configured to send the whole bundle, or it doesn't have a client cert to provide, or maybe the client didn't like the list of acceptable CA root certs the server offered. This code is not cause for concern unless you were expecting to be able to verify that client because you have the necessary certs installed.
| |
− | * <code>verify=FAIL</code> means that the server asked for a cert, and the client provided one that couldn't be verified. Maybe it's expired, or the server doesn't have the necessary root and intermediate certs, or the certs it has don't have signatures that match those presented, or one of the certs presented is listed in the CRL file (if any).
| |
− | * Other codes are <code>NONE</code> (no STARTTLS command issued), <code>SOFTWARE</code> (TLS handshake failure), <code>PROTOCOL</code> (SMTP error), and <code>TEMP</code> (temporary, unspecified error).
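To see which of these codes your server actually produces, a quick tally of the log helps. A sketch; <code>/var/log/maillog</code> is the usual FreeBSD location:

```shell
#!/bin/sh
# Count occurrences of each verify= outcome in a sendmail log,
# most frequent first.
verify_counts() {
    grep -o 'verify=[A-Z]*' "$1" | sort | uniq -c | sort -rn
}
verify_counts "${1:-/var/log/maillog}"
```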
| |
− | | |
− | By default, Sendmail doesn't care what the code is; it'll proceed with the transaction anyway, if possible. Depending on your needs, you can configure Sendmail to react to these codes.
| |
− | | |
− | Even if there is no verification, the transaction is still encrypted; there is just no certainty of the identity of the connecting host.
| |
− | | |
− | ====The biggest caveat====
| |
− | On a public MX host, you're required (by RFC 3207) not to refuse delivery just because the connection is unencrypted, so you can't really do much verification of clients.
| |
− | | |
− | A client may present you with valid certs, but if you don't have the necessary certs installed to verify them, that's your fault, not the client's. And you can't say that <code>verify=FAIL</code> is a reason to refuse delivery while accepting every other non-<code>verify=OK</code> code. I mean, what's to stop the client from just trying again and deliberately triggering one of the other codes? It could skip STARTTLS entirely, or simply not send a cert.
| |
− | | |
− | So really there are only a few choices (pick one):
| |
− | * Don't attempt verification at all.
| |
− | * Attempt verification of a handful of trusted hosts & root CAs, but only for informational purposes.
| |
− | * Require encrypted connections, attempt verification of a handful of trusted hosts & root CAs, and disallow relaying for those that don't get <code>verify=OK</code>. This is not an option for public servers.
| |
− | | |
− | ==Sendmail encryption related documentation of note==
| |
− | Official Sendmail docs:
| |
− | * /usr/share/sendmail/cf/README - massive doc explaining .mc & .cf files and all the options therein. Current copy online [http://web.mit.edu/freebsd/head/contrib/sendmail/cf/ at MIT].
| |
− | * /usr/share/sendmail/cf/cf/knecht.mc - Eric Allman's .mc file with many interesting things in it
| |
− | * (this is where it ends up on FreeBSD:) /usr/src/contrib/sendmail/doc/op/op.me - troff source for the ''Sendmail Installation and Operation Guide''. On FreeBSD there's a Makefile in that folder, so you can <code>cd /usr/src/contrib/sendmail/doc/op/ && make op.ps op.txt op.pdf</code> to generate PostScript, ASCII (ugly), and PDF copies. A recent but not-quite-current PDF copy is [http://www.sendmail.com/pdfs/open_source/installation_and_op_guide.pdf at sendmail.com]. No one else seems to have it online, and very few sites refer to it, yet it's indispensable!
| |
− | | |
− | FreeBSD-specific:
| |
− | * /etc/mail/README - Mainly just explains how to work around an issue with getting it to work with jails.
| |
− | * [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/smtp-auth.html SMTP Authentication] - outdated chapter of the FreeBSD Handbook. The instructions for rebuilding Sendmail are good for enabling STARTTLS and AUTH, at least, but these docs need work.
| |
− | | |
− | Useful guides:
| |
− | * [http://yom.iaelu.net/2012/04/secured-sendmail-with-smtp.html Secured Sendmail with SMTP Authentication] Guillaume "yom" Bibaut's HOWTO
| |
− | * [http://weldon.whipple.org/sendmail/wwstarttls.html My Experiences (So Far) with STARTTLS and Sendmail] Weldon Whipple's trials and tribulations; covers certificate and other stuff in more depth than most, but also somewhat outdated (c. 2002)
| |
− | | |
− | Cyrus SASL-related:
| |
− | * http://www.postfix.org/SASL_README.html#server_cyrus - 95% good info about SASL, 5% Postfix-specific stuff you can ignore
| |
− | * [http://content.hccfl.edu/pollock/AUnixSec/SASLNotes.htm Configuring SASL] - Wayne Pollock's mostly excellent overview, only the "Available Mechanisms: PLAIN" section is outdated; the saslauthd man page explains what's really available.
| |
− | * [http://www.sendmail.org/~ca/email/cyrus2/sysadmin.html Cyrus SASL for System Administrators] - Claus Aßmann's current docs
| |
− | | |
− | TLS/SSL and certificates:
| |
− | * [http://www.sendmail.org/~ca/email/starttls.html SMTP STARTTLS in sendmail/Secure Switch] - Claus Aßmann's current docs, not all that helpful.
| |
− | * IBM's WebSphere MQ documentation has a great [http://publib.boulder.ibm.com/infocenter/wmqv6/v6r0/index.jsp?topic=/com.ibm.mq.csqzas.doc/sy10600_.htm general explanation of certificates]. Ignore the MQ-specific stuff.
| |
− | * [http://www.madboa.com/geek/openssl/ OpenSSL Command-Line HOWTO] - Paul Heinlein's invaluable doc
| |
− | | |
− | ==Anti-spam==
| |
− | | |
− | ===Enable a caching DNS server===
| |
− | FreeBSD 9 and earlier come with BIND preconfigured as a caching DNS server listening on 127.0.0.1, but it is disabled by default. Enabling it reduces traffic to and from other DNS servers. You can also configure it to bypass your ISP's DNS server, if that's what you normally use, in order to use certain RBL services to combat spam (see next section).
− | | |
− | On FreeBSD 10 and up, the DNS server is called Unbound, and by default it is configured as a local caching resolver. See https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-dns.html for how to enable it.
| |
− | | |
− | On FreeBSD 9 and lower, with BIND:
| |
− | * Add <code>named_enable="YES"</code> to <code>/etc/rc.conf</code>
| |
− | * Uncomment the <code>forwarders</code> section of <code>/etc/named/named.conf</code> and put your ISP's nameserver addresses in it.
| |
− | * In <code>/etc/resolv.conf</code>, replace your ISP's nameserver addresses with 127.0.0.1 (or—and I haven't tested this—if you use DHCP, add <code>prepend domain-name-servers 127.0.0.1;</code> to the <code>/etc/dhclient.conf</code> section for your network interface; see the dhclient.conf man page).
| |
− | * <code>service named onestart</code>
| |
− | | |
− | Test it:
| |
− | | |
− | * <code>nslookup freebsd.org</code>
| |
− | | |
− | The first line of output should say <code>Server: 127.0.0.1</code> and the lookup should succeed.
| |
− | | |
− | At this point you are just forwarding; anytime you look up a host not yet in the cache, you are asking your ISP's nameserver to request it for you. It might pull it from its own cache.
| |
− | | |
− | === Support RBLs ===
| |
− | You are probably combating spam by using RBLs, which rely on DNS queries to find out whether a given IP is a suspected spammer.
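For reference, an RBL lookup is just a DNS query for a name built from the client's IP with its octets reversed, prepended to the list's zone. A sketch of the name construction (the <code>rbl_name</code> helper is mine; zen.spamhaus.org is only an example zone):

```shell
#!/bin/sh
# Build the DNSBL query name for an IPv4 address:
# 127.0.0.2 checked against zen.spamhaus.org -> 2.0.0.127.zen.spamhaus.org
rbl_name() {
    echo "$1" | awk -F. -v zone="$2" '{ printf "%s.%s.%s.%s.%s\n", $4, $3, $2, $1, zone }'
}
rbl_name 127.0.0.2 zen.spamhaus.org
```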
| |
− | | |
− | Some RBL services block queries from the major ISPs, because they generate too much traffic. [http://www.uribl.com/about.shtml#abuse URIBL] is an example of such a service.
| |
− | | |
− | To deal with this, after enabling the caching & forwarding DNS service as described above, you now need to disable forwarding for just the RBL domains. Then your server will query those domains' DNS servers directly. It will work if you just add something like this to <code>named.conf</code> (then restart named):
| |
− | | |
− | <pre>/* Let RBLs see queries from me, rather than my ISP, by disabling forwarding for them: */
| |
− | | |
− | // RBLs that are disabled but mentioned in my sendmail config
| |
− | zone "blackholes.mail-abuse.org" { type forward; forward first; forwarders {}; };
| |
− | | |
− | // RBLs that are enabled in my sendmail config
| |
− | zone "bl.score.senderscore.com" { type forward; forward first; forwarders {}; };
| |
− | zone "zen.spamhaus.org" { type forward; forward first; forwarders {}; };
| |
− | | |
− | // RBLs that are probably enabled in SpamAssassin
| |
− | zone "multi.uribl.com" { type forward; forward first; forwarders {}; };
| |
− | zone "dnsbl.sorbs.net" { type forward; forward first; forwarders {}; };
| |
− | zone "combined.njabl.org" { type forward; forward first; forwarders {}; };
| |
− | zone "activationcode.r.mail-abuse.com" { type forward; forward first; forwarders {}; };
| |
− | zone "nonconfirm.mail-abuse.com" { type forward; forward first; forwarders {}; };
| |
− | zone "iadb.isipp.com" { type forward; forward first; forwarders {}; };
| |
− | zone "bl.spamcop.net" { type forward; forward first; forwarders {}; };
| |
− | zone "fulldom.rfc-ignorant.org" { type forward; forward first; forwarders {}; };
| |
− | zone "list.dnswl.org" { type forward; forward first; forwarders {}; };
| |
− | </pre>
| |
− | | |
− | ===Secondary and tertiary MX records===
| |
− | To have a place for your inbound mail to queue when your host is down, it's common to set up a secondary MX that stores and forwards. The downside is that it tends to attract spam that doesn't get caught: the secondary MX accepts all mail for your domain, and your host, when it comes back online, accepts all mail from that secondary.
| |
− | | |
− | One way to partially work around this problem is to make your primary MX host also be a tertiary MX. Some spammers will favor the tertiary, but real mailers will try the secondary first.
| |
− | | |
− | If the spammers get wise, you can try using a different hostname for the tertiary MX, so long as its A record points to the same IP.
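Put together, the records might look like this (a hypothetical zone fragment; example.org and the hostnames are placeholders):

```text
example.org.   IN  MX  10  mail.example.org.       ; primary
example.org.   IN  MX  20  backup-mx.example.org.  ; secondary (store-and-forward)
example.org.   IN  MX  30  mail.example.org.       ; tertiary = same host as primary
```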
| |
− | | |
− | ===Spamassassin===
| |
− | | |
− | It's tempting to run every piece of incoming mail through Spamassassin, but you don't want to block messages that "look spammy" such as bounces and mailing list traffic (especially the spamassassin users' mailing list). I haven't figured out how to do it right, so I only run Spamassassin as a user, via procmail, and my .procmailrc excludes administrative messages (including bounces) and mailing list traffic from scanning.
| |
− | | |
− | ====Enable DCC====
| |
− | DCC will score any bulk mail higher. This means legit mailing list posts will also be scored higher, so using it means you have to be vigilant about whitelisting or avoiding scanning mailing list traffic.
| |
− | | |
− | To enable DCC checking, just uncomment the appropriate line in /usr/local/etc/mail/spamassassin/v310.pre.
| |
− | | |
− | The feature requires allowing UDP traffic in & out on port 6277. See [http://www.rhyolite.com/dcc/FAQ.html the DCC FAQ], especially [http://www.rhyolite.com/dcc/FAQ.html#firewall-ports2 the firewall ports question]. I didn't need to do anything special to enable this with my particular firewall configuration, but if I did, I would probably put an ipfw allow rule in /etc/rc.local.
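If your firewall did need explicit rules, they might look something like this (hypothetical /etc/rc.local lines for ipfw; untested, adjust to your ruleset):

```text
# allow DCC client traffic (UDP port 6277) out, and replies back in
ipfw add allow udp from me to any 6277 out
ipfw add allow udp from any 6277 to me in
```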
| |
− | | |
− | ====Enable SPF...or not====
| |
− | SPF is for catching forged email. See [http://www.akadia.com/services/spf.html http://www.akadia.com/services/spf.html]. The idea is that email from a user at a particular domain will get a "pass" from the SPF checker if the mail comes from an IP address that the domain owner has approved via a special entry in their DNS records. Otherwise it gets a "fail" or "softfail" or whatever.
| |
− | | |
− | Getting a "pass" is worthless (Spamassassin score adjustment of zero) because so many spammers use custom domains that they control and set SPF records for. A "fail" is worth about 0.9. It's great for catching a certain kind of spam, as long as the domain owner keeps their SPF records updated and legitimate email from that domain always goes direct from the approved servers to the recipient's servers.
| |
− | | |
− | I've read several anti-SPF rants that seem to say there are other reasons SPF is "harmful," but they don't really explain the problems very well, and they don't seem to be based on empirical evidence of "harm."
| |
− | | |
− | Honestly, I very rarely get any SPF passes and even fewer fails. It's just wasting time to enable SPF checking in Spamassassin, so after enabling it for a while (in init.pre), I turned it off.
| |
− | | |
− | I look at SPF more as just protection for legitimate domains. Non-spam domains with SPF info in their DNS records are far less likely to be forged by spammers. So for my domain, I set up a TXT record that says "v=spf1 a mx -all". Now spammers are less likely to use my domain in the envelope sender address.
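In zone-file form the record is a one-liner (example.org is a placeholder for your domain):

```text
; allow only this domain's A and MX hosts to send its mail; hard-fail everything else
example.org.  IN  TXT  "v=spf1 a mx -all"
```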
| |
− | | |
− | ==NTP==
| |
− | For things to run smoothly, especially email, you need to keep your system's clock (the one that keeps track of the actual date/time) in sync with the outside world.
| |
− | | |
− | ===stock/classic/reference/Unix ntpd===
| |
− | When setting up FreeBSD via sysinstall, you're asked to pick a server for '''ntpdate''' to use.[http://www.freebsd.org/doc/en/books/handbook/install-post.html#Ntpdate-config] This sets <code>ntpdate_hosts="..."</code> and <code>ntpdate_enable="YES"</code> in /etc/rc.conf, which causes /etc/rc.d/ntpdate to run at boot time to set the clock once, with immediate results. You're expected to make it run daily, if not more often, via a script or cron job.
| |
− | | |
− | But wait, '''ntpdate is deprecated'''! See its man page. You're now supposed to run '''ntpd''', which adjusts the time gradually, and can connect to remote NTP servers as often as it needs to.
| |
− | | |
− | Ideally, you have it running as a daemon, enabled via <code>ntpd_enable="YES"</code> in /etc/rc.conf. You could also or instead do a clock sync on-demand via <code>ntpd -q</code>, same as running <code>ntpdate</code>. Either way, it uses /etc/ntp.conf for its configuration, which mainly just says which servers to check.
| |
− | | |
− | See below for a reason you may not want to run the daemon.
| |
− | | |
− | If you don't like running the daemon, just set up a root cron job to run <code>/usr/sbin/ntpd -q -x > /dev/null</code> every 4 hours or so.
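In /etc/crontab syntax, that suggestion looks like this (a sketch; runs on the hour, every 4 hours):

```text
# minute hour mday month wday who   command
0  */4  *  *  *  root  /usr/sbin/ntpd -q -x > /dev/null 2>&1
```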
| |
− | | |
− | Rudimentary instructions for getting ntpd as a daemon are [http://docs.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-ntp.html in the FreeBSD Handbook], but they don't cover security issues very well. In particular, you need this in your /etc/ntp.conf:
| |
− | | |
− | <pre># 2013-2014: close off hole that lets people use the server to DDoS
| |
− | #
| |
− | # 1. disable monitoring
| |
− | #
| |
− | disable monitor
| |
− | #
| |
− | # 2. before 'server' lines, use the following, as per
| |
− | # https://www.team-cymru.org/ReadingRoom/Templates/secure-ntp-template.html
| |
− | #
| |
− | # by default act only as a basic NTP client
| |
− | restrict -4 default nomodify nopeer noquery notrap
| |
− | restrict -6 default nomodify nopeer noquery notrap
| |
− | # allow NTP messages from the loopback address, useful for debugging
| |
− | restrict 127.0.0.1
| |
− | restrict ::1</pre>
| |
− | | |
− | You need this because '''this particular ntpd implementation listens on UDP port 123 all the time, exposing it to the outside world'''. It needs to keep that port open in order to work at all. You should reduce the exposure via <code>restrict</code> lines in ntp.conf; these say that only traffic purporting to be from certain hosts (the servers you want time info from) will be acknowledged. It wouldn't hurt to duplicate this info in your firewall rules. But I have had bad luck with geographically nearby NTP servers going offline over the years, so I much prefer to use the pool.ntp.org hostnames as the servers to sync to. These pools, by nature, are always changing their IP addresses, so you can't whitelist them with "restrict" lines or firewall rules, because you don't know what the addresses are. Therefore, it's better not to run the stock ntpd in daemon mode unless you use only static IPs in your ntp.conf <code>server</code> lines.
| |
− | | |
− | So instead of running stock ntpd, I run openntpd from the ports collection. It doesn't have this problem.
| |
− | | |
− | ===OpenNTPD===
| |
− | After searching in vain for a way to use the pools securely, I gave up and decided to run '''openntpd''' from ports. This is much, much simpler.
| |
− | | |
− | * <code>portmaster net/openntpd</code>
| |
− | * In /etc/rc.conf:
| |
− | | |
− | <pre>ntpd_enable="NO"
| |
− | openntpd_enable="YES"
| |
− | openntpd_flags="-s"</pre>
| |
− | | |
− | * You can use /usr/local/etc/ntpd.conf as-is; it just says to use a random selection from pool.ntp.org, and to not listen on port 123 (it'll use random, temporary high-numbered ports instead).
| |
− | | |
− | * Logging is same as for the stock ntpd; just put this in /etc/syslog.conf:
| |
− | | |
− | <pre>ntp.* /var/log/ntpd.log</pre>
| |
− | | |
− | * <code>touch /var/log/ntpd.log</code>
| |
− | * <code>service syslogd reload</code>
| |
− | | |
− | * Log rotation is probably desirable. Put this in /etc/newsyslog.conf:
| |
− | | |
− | <pre>/var/log/ntpd.log 644 3 * @T00 JCN</pre>
| |
− | | |
− | * <code>service ntpd stop</code> (obviously not necessary if you weren't running the stock ntpd before)
| |
− | * <code>service openntpd start</code>
| |
− | | |
− | You can tail the log to see what it's doing. You should see messages about valid and invalid peers, something like this:
| |
− | | |
− | <pre>ntp engine ready
| |
− | set local clock to Mon Feb 17 11:44:06 MST 2014 (offset 0.002539s)
| |
− | peer x.x.x.x now valid
| |
− | adjusting local clock by -0.046633s
| |
− | </pre>
| |
− | | |
− | ==Spamassassin config==
| |
− | See above re:
| |
− | * [[#Update SpamAssassin and related|updating Spamassassin]], which sometimes involves fixing things that break
| |
− | * [[#sa-utils|setting up sa-utils]] for daily ruleset maintenance, and using the "sought" ruleset
| |
− | * [[#Enable a caching DNS server|enabling a caching, non-forwarding DNS server]] so RBL checks work
| |
− | | |
− | Here are some notes about the rest of my Spamassassin config.
| |
− | | |
− | ===v320.pre===
| |
− | There are a bunch of plugins that come with Spamassassin. Many are enabled by default via <code>loadplugin</code> lines in the various *.pre files. I enabled a couple more by uncommenting some more <code>loadplugin</code> lines in /usr/local/etc/mail/spamassassin/v320.pre.
| |
− | | |
− | This one is what allows the shortcircuit rules to work:
| |
− | <pre>loadplugin Mail::SpamAssassin::Plugin::Shortcircuit</pre>
| |
− | | |
− | ...You also have to create shortcircuit.cf; see below.
| |
− | | |
− | This one is an optimization to compile rules to native code:
| |
− | <pre>loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody</pre>
| |
− | | |
− | ===shortcircuit.cf===
| |
− | Some basic rules for the Shortcircuit plugin come with SpamAssassin. These rules can be extended by using the [https://wiki.apache.org/spamassassin/ShortcircuitingRuleset sample Shortcircuiting Ruleset] in the SA wiki.
| |
− | | |
− | ===spamc.conf===
| |
− | I feel it's a good idea to avoid scanning extremely large messages. Yes, this gives spammers a back door, but scanning incoming email shouldn't be something that cripples the server. If I had a faster box with more RAM, I would set this limit much higher.
| |
− | | |
− | <pre># max message size for scanning = 600k
| |
− | -s 600000
| |
− | </pre>
| |
− | | |
− | ===local.cf===
| |
− | I want suspected spam to be delivered to users as regular messages, not as attachments to a Spamassassin report:
| |
− | <pre>report_safe 0</pre>
| |
− | | |
− | If a message matches the whitelists, just deliver it without doing a full scan:
| |
− | <pre>shortcircuit USER_IN_WHITELIST on
| |
− | shortcircuit USER_IN_DEF_WHITELIST on
| |
− | shortcircuit USER_IN_ALL_SPAM_TO on
| |
− | shortcircuit SUBJECT_IN_WHITELIST on</pre>
| |
− | | |
− | Likewise, if a message matches the blacklists, just call it spam:
| |
− | <pre>shortcircuit USER_IN_BLACKLIST on
| |
− | shortcircuit USER_IN_BLACKLIST_TO on
| |
− | shortcircuit SUBJECT_IN_BLACKLIST on</pre>
| |
− | | |
− | I've never seen BAYES_00 or BAYES_99 mail that was misclassified, so avoid a full scan on that as well:
| |
− | <pre>shortcircuit BAYES_99 spam
| |
shortcircuit BAYES_00 ham</pre>

My users get to have their own ~/.spamassassin/user_prefs files:
<pre>allow_user_rules 1</pre>

My users probably aren't sending out spam to other users on my system:
<pre># probably not spam if it originates here (default score 0)
score NO_RELAYS 0 -5 0 -5</pre>

Custom rule: among my users (mainly me), I believe a message with a <code>List-Id</code> header is slightly less likely to be spam:
<pre>header FROM_MAILING_LIST exists:List-Id
score FROM_MAILING_LIST -0.1</pre>

Custom rule: a message purporting to be from a mailing list run by my former employer is much less likely to be spam:
<pre>header FOURTHOUGHT_LIST List-Id =~ /<[^.]+\.[^.]+\.fourthought\.com>/
score FOURTHOUGHT_LIST -5.0</pre>

Custom rule: a message from an IP resolving to anything.ebay.com can be whitelisted:
<pre># maybe not ideal, but at one point I missed some legit eBay mail
whitelist_from_rcvd *.ebay.com ebay.com</pre>

I realize these custom rules could easily let spam through, but I was desperate to avoid false positives, which I was getting when using the AWL (Auto-WhiteList plugin); despite copious training, it was making a lot of ham score as spam. AWL is no longer enabled in SpamAssassin by default, and I sure as hell am not using it ever again. So I probably don't need these rules anymore. I leave them in, though, because they remind me how to set up this kind of thing.

Before I [[#Enable a caching DNS server|enabled a caching, non-forwarding DNS server]], the URIBL rules weren't working, so I had to disable the lookups by setting the URIBL scores to zero. Now that my URIBL queries come from my own IP rather than my ISP's DNS servers, the lookups work properly. Therefore, I've got this commented out now; it's just here for future reference:
<pre>#score URIBL_BLACK 0
#score URIBL_RED 0
#score URIBL_GREY 0
#score URIBL_BLOCKED 0</pre>

Bounces generated by my own MTA for mail that originates on my network will get scored lower (i.e., more likely to be ham) due to the NO_RELAYS rule. Without additional configuration, though, bounces generated by remote MTAs, whether for mail originating on my network or elsewhere, will not be recognized or handled differently than any other inbound mail. A remotely generated bounce for mail that originated elsewhere is called '''backscatter'''; it is not actually spam, although it often contains spam or viruses, and is generally unwanted.

In order to distinguish bounces from regular mail, and to distinguish bounces for mail originating here from backscatter (though by default it does not actually score them differently), I need to activate the VBounce plugin. This plugin is already enabled in v320.pre, but it doesn't actually do anything until it is told what the valid relays are for local outbound mail. So here I tell it what to look for in the Received headers to know that a message is a bounce of mail that originated from my network:
<pre>whitelist_bounce_relays chilled.skew.org</pre>

Bounces should then hit the ANY_BOUNCE_MESSAGE rule plus one of these:
* BOUNCE_MESSAGE = MTA bounce message
* CHALLENGE_RESPONSE = Challenge-Response message for mail you sent
* CRBOUNCE_MESSAGE = Challenge-Response bounce message
* VBOUNCE_MESSAGE = Virus-scanner bounce message

You can customize your scoring for these if you want, or in your .procmailrc you can specially handle scanned mail with these tags appearing in the <code>X-Spam-Status</code> header. However, I thought I shouldn't be sending obvious bounces to SpamAssassin at all...hmm.
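For example, a minimal .procmailrc recipe along these lines (the folder name is hypothetical, and this assumes the rule name appears unwrapped on the first line of the header) could file tagged bounces into their own mbox:

<pre># file anything SpamAssassin identified as a bounce into mail/bounces
:0:
* ^X-Spam-Status:.*ANY_BOUNCE_MESSAGE
mail/bounces</pre>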

===Personal user_prefs===
After saving and separating my ham and spam for a couple of months, then looking at the scores, I'm pretty confident that ham addressed to me is very unlikely to score much higher than 3, so I lowered the spam threshold from 5 to 4:
<pre>require_hits 4</pre>

Similarly, I'm finding ham addressed to me is very unlikely to be in the BAYES_50_BODY to BAYES_99_BODY range, so I bump those scores up a bit:
<pre># defaults for the following are 0.001, 1.0, 2.0, 3.0, 3.5
score BAYES_50_BODY 2.0
score BAYES_60_BODY 2.5
score BAYES_80_BODY 3.0
score BAYES_95_BODY 4.0
score BAYES_99_BODY 4.5</pre>

I thought the default score for a Spamcop hit was pretty low, so I bumped it up:
<pre># default for the following is 1.3, as of January 2014
score RCVD_IN_BL_SPAMCOP_NET 3.0</pre>

(I already have my MTA checking Spamcop, but it only looks at the IP connecting to me, so it lets through spam that originated at a Spamcop-flagged IP but was relayed through a non-flagged intermediary.)

Remember the down-scoring I do for mailing lists in the site config? Well, if that mailing list traffic is addressed to me, I want to score it even lower:
<pre>score FOURTHOUGHT_LIST -100.0
score FROM_MAILING_LIST -1.0</pre>

I also have a bunch of <code>whitelist_from</code> entries for my personal contacts.
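These entries look something like the following (the addresses here are made up; <code>whitelist_from</code> accepts globs):

<pre>whitelist_from friend@example.com
whitelist_from *@example.org</pre>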

Finally, I want a SpamAssassin report added to the headers of every message I get, so I know why it scored as it did:
<pre>add_header all Report _REPORT_</pre>
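The report then shows up in an <code>X-Spam-Report</code> header; the rules and scores below are just an illustration of roughly what it looks like:

<pre>X-Spam-Report:
	*  2.0 BAYES_50_BODY BODY: Bayes spam probability is 40 to 60%
	*      [score: 0.5172]
	* -0.1 FROM_MAILING_LIST List-Id header exists</pre>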

==Git==
I already have git installed on a different host, so this is mostly just my notes on how to use it.

===Initial setup===
This creates ~/.gitconfig and populates it with reasonable defaults (but set user.name and user.email to real values; I made mine match what I use on GitHub, for consistency):
<pre>git config --global user.name "yourname"
git config --global user.email "youremail"
git config --global core.excludesfile ~/.gitignore
git config --global core.autocrlf input
git config --global core.safecrlf true
git config --global push.default simple
git config --global branch.autosetuprebase always
git config --global color.ui true
git config --global color.status auto
git config --global color.branch auto
</pre>

Create a ~/.gitignore and tell it what file globs to ignore (so they won't be treated as part of your project):
<pre># ignore files ending with .old, .orig, or ~
*.old
*.orig
*~
</pre>

Create a place for your repos:
* <code>mkdir ~/git_repos</code>
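A brand-new repo can then be created and given its first commit like this (the project and file names are hypothetical):

<pre>cd ~/git_repos
git init myproject
cd myproject
echo "# My project" > README.md
git add README.md
git commit -m "initial commit"</pre>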

===Use a separate SSH keypair for GitHub===
You don't have to use your main SSH identity for GitHub.
* Generate a new keypair: <code>ssh-keygen -t dsa -C "you@yourhost.com"</code>
* When prompted for a file in which to save the key, make it create a new file: <code>~/.ssh/id_dsa_github</code>
* Set a passphrase when prompted.
* Copy-paste the content of the public key file, <code>~/.ssh/id_dsa_github.pub</code>, into the SSH keys section of your settings on GitHub.
* In your <code>~/.ssh/config</code>, add this:
<pre>Host github.com
IdentityFile ~/.ssh/id_dsa_github
</pre>
* See if it works: <code>ssh -T git@github.com</code>
You should get a message that you've successfully authenticated.

==Customizations==
Here are some of my favorite customizations.

===Things in root's crontab===
This is not a complete list, of course.
<pre># every 5 minutes, run mrtg to update the network traffic graphs
*/5 * * * * env LANG=C /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg

# on the 8th day of every month, update the GeoIP databases
50 0 8 * * /usr/local/bin/geoipupdate.sh > /dev/null 2>&1

# every hour, clear out the PHP session cache
10 * * * * /usr/local/adm/clean_up_php_sessions > /dev/null 2>&1
</pre>

===Things in my crontab===
This is not a complete list, either.
<pre># nightly learning of spam misfiled as ham by SpamAssassin (I put it in ~/mail/notham)
35 04 * * * [ -s /home/mike/mail/notham ] && /usr/local/bin/sa-learn --spam --mbox /home/mike/mail/notham > /dev/null 2>&1 && rm /home/mike/mail/notham
</pre>

===/usr/local/adm/clean_up_php_sessions===
PHP defaults to storing sessions in /tmp or /var/tmp, and has a 1 in 1000 chance of running its garbage collector upon the creation of a new session. The garbage collector expires sessions that are more than 24 minutes old. You can increase the probability of it running, but you still have to wait for a new session to be created, so it's really only useful for sites which get a new session created every 24 minutes or less. Otherwise, you're better off (IMHO) just running a script to clean out the stale session files. I am using the script below, invoked from root's crontab (as shown above):

<pre>#!/bin/sh
# compute session.gc_maxlifetime (reported in seconds) as minutes, once
maxmin=$(echo `/usr/local/bin/php -i | grep session.gc_maxlifetime | cut -d " " -f 3` / 60 | bc)
echo "Deleting the following stale sess_* files:"
# list and delete in a single pass
find /tmp /var/tmp -type f -name sess_\* -cmin +$maxmin -print -delete
</pre>

Of course you can store session data in a database if you want, and the stale-file problem is avoided altogether. But then that's just one more thing that can break.

===/etc/periodic.conf===
After installing the sa-utils port:
* <code>daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org"</code>
* <code>daily_sa_quiet="yes"</code>

To ensure verbose output of the daily run of "pkg audit" (so you can see the vulnerability details):
* <code>daily_status_security_pkgaudit_quiet="NO"</code>

===/etc/ssh/sshd_config===
These affect the behavior of the SSH server.
* <code>Port #####</code> - Change the listening port from 22 to something else! This eliminates virtually all automated brute-force attacks.
* <code>GatewayPorts yes</code> - Enable public access to reverse tunnels.
* <code>ClientAliveInterval 30</code> - Every 30 seconds, check for client response.
* <code>ClientAliveCountMax 99999</code> - Don't disconnect an unresponsive client until 99999 checks fail.
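As a reminder, a reverse tunnel is set up from the client side; with <code>GatewayPorts yes</code> on the server, other hosts can then reach the forwarded port. The port numbers here are just examples:

<pre># expose this machine's local web server as port 8080 on the remote host
ssh -R 8080:localhost:80 user@my.otherhost.com</pre>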

===~/.ssh/config===
These are settings to use when connecting with the ssh client to remote hosts (replace ##### as appropriate):
<pre>CheckHostIP yes
Compression yes
Host my.otherhost.com
IdentityFile ~/.ssh/id_dsa_github
</pre>

===/etc/sysctl.conf===
These are changes to default kernel settings in multi-user mode.
* <code>net.inet.tcp.keepidle=540000</code> - Probably no longer necessary if using the sshd_config customizations above, but just in case: every 9 minutes (instead of every 2 hours), send something to every TCP client, so crappy routers between us and them don't think we've disconnected. I added this because I found that some routers had a 10-minute connection timeout, which kept killing my SSH sessions and tunnels.
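Settings in /etc/sysctl.conf are applied at boot; to apply one immediately without rebooting, use sysctl(8):

<pre>sysctl net.inet.tcp.keepidle=540000</pre>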

===/etc/make.conf===
These are extra make variables set during 'make' runs, usually specially checked for by the Makefiles in the FreeBSD ports.

<pre>##
## options for 'make buildworld' and components thereof:
##
# when building top(1), only allocate enough space to handle 75 users, rather than 10000
TOP_TABLE_SIZE=151
# for code with processor-specific optimizations (e.g. base OpenSSL), optimize for my Pentium III CPU (SSE+MMX)
CPUTYPE?= pentium3
# when building sendmail(1), enable STARTTLS support (requires security/cyrus-sasl2 port and additional configuration)
SENDMAIL_CFLAGS=-I/usr/local/include/sasl -DSASL
SENDMAIL_LDFLAGS=-L/usr/local/lib
SENDMAIL_LDADD=-lsasl2
# I don't remember why the next two lines got commented out!
#SENDMAIL_MC= /etc/mail/chilled.skew.org.mc
#SENDMAIL_SUBMIT_MC= /etc/mail/chilled.skew.org.submit.mc

##
## options for building ports:
##
# I am using the new package system (required now)
WITH_PKGNG=yes

# my ancient network card does not support IPv6, so don't bother with IPv6 in networking ports
WITHOUT_IPV6=yes

# networking ports like curl(1) should support HTTPS
WITH_HTTPS=yes

# don't build or install GUIs, including X11 libraries
WITHOUT_GUI=yes
WITHOUT_X11=yes
OPTIONS_UNSET=X11

# don't waste time on tests when building ImageMagick
WITHOUT_IMAGEMAGICK_TESTS=yes

# when building FreeType, enable subpixel rendering capability (disabled by default due to patent issues)
WITH_LCD_FILTERING=yes

# Berkeley DB 5 was the highest version supported by devel/apr1 (Apache dependency) in mid-2014.
# This can be removed if db6 is installed (but the apr1 port will not install it for you).
WITH_BDB_VER=5

# As required by the /usr/ports/UPDATING entry 20141209:
# ensure Linux ports use emulators/linux_base-c6 (CentOS 6 userland), not linux_base-f10 (Fedora 10, unsupported)
OVERRIDE_LINUX_BASE_PORT=c6
OVERRIDE_LINUX_NONBASE_PORTS=c6
</pre>

===/etc/syslog.conf===
Anything going to /dev/console should also go to a regular file:
<pre>console.* /var/log/console.log</pre>
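syslogd will not create this file on its own, so create it with restrictive permissions and restart syslogd:

<pre>touch /var/log/console.log
chmod 600 /var/log/console.log
/etc/rc.d/syslogd restart</pre>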

If logged in, some users get important messages in their ttys:
<pre>!-sm-mta
*.notice root,mike
!sm-mta
*.warning root,mike
!*</pre>

===/etc/rc.local===

Here is a bare-bones /etc/rc.local which does nothing:

<pre>#!/bin/sh
#
# This file is a deprecated but convenient method of launching additional
# "local daemons" (or just running any other startup tasks) at the very
# end of the boot process. See the rc(8) manual page.
#

# load variables from rc.conf (comment out if not needed)
#
#if [ -z "${source_rc_confs_defined}" ]; then
# if [ -r /etc/defaults/rc.conf ]; then
# . /etc/defaults/rc.conf
# source_rc_confs
# elif [ -r /etc/rc.conf ]; then
# . /etc/rc.conf
# fi
#fi
</pre>

It runs at the end of the boot process to load any custom daemons and to run anything else you want. Its output is prefaced with "Starting additional daemons: ", though, so you want to keep its output to a minimal list, all on one line if possible. For example:

<pre># load additional firewall rules
rules="/etc/ipfw.rules"
[ -f $rules ] && echo -n " $rules" && . $rules

# make encrypted swap file
mkswap="/etc/mkswap.sh"
[ -f $mkswap ] && echo -n " $mkswap" && . $mkswap</pre>

===~/.cshrc===

See my [[User:Mjb/tcsh configuration files#.cshrc|tcsh configuration files]] document.

===~/.login===

See my [[User:Mjb/tcsh configuration files#.login|tcsh configuration files]] document.

===nano configuration files===

See my [[User:Mjb/nano configuration files|nano configuration files]] document.