User:Mjb/FreeBSD on BeagleBone Black/Additional software

Revision as of 09:18, 16 April 2020 by Mjb (talk | contribs) (Secure it)

This is a continuation of my FreeBSD on BeagleBone Black notes. Any questions/comments, email me directly at root (at) skew.org.

Some of my echo commands require support for \n escapes, e.g. by running setenv ECHO_STYLE both in tcsh.


Ports and packages management

On Windows, Mac OS, iOS, and Android, third-party software normally comes in the form of a "self-extracting installer" app whose sole purpose is to show you a license agreement and install all the components of the software, probably removing or cleaning up old versions too. On Linux distros, third-party software is normally installed by using a "package manager" to install pre-built software packages; this is basically a generic installer, and you tell it where to get the pre-built software you want, or it looks up the name in a centralized index to figure out where to get it. Some software does not come packaged, though; you must build it from source code and let it install itself. Although it is much slower than installing a pre-built package, the resulting code can be better optimized for your system.

FreeBSD supports packages, too, installed via the simple package manager pkg. But the BSD OSes are somewhat unique in that the most popular way of installing software is via "ports", which is a little of both. Using ports, the source code is normally fetched, patched, and built locally, usually just with one command, although you may be prompted first to select some build-time options, often just to avoid building unnecessary dependencies. On FreeBSD, the installation is performed natively via the just-built source code, so the port's patches are often just making sure that the native installer puts things in the standard places for FreeBSD.

The first port to install on my systems is always portmaster, which further simplifies installing from ports: it can install packages instead where appropriate, and it can build an optional package from the already-installed files in case you need to reinstall. (In contrast, on OpenBSD, the files are installed by a package which is always created after the build.)

Using pkg

ARMv6/ARMv7 is now supported by the official FreeBSD package repository, so a lot of things can be directly installed and upgraded via commands like this:

  • pkg install nano
  • pkg upgrade nano

When it works, it's super fast, way better than building from source. But there are drawbacks:

  • All packages are built with default options. If you would normally customize a port's options in the "make config" step, you're out of luck.
  • The packages are built with specific dependencies. During an upgrade, some major dependencies (Perl, in particular) can end up being downgraded or have two different versions installed, if you are not careful.

In both cases, you have to go back to upgrading via the ports collection. Another option is to build (probably cross-compile) your own package repository on a faster system on your own schedule, rather than using the official repository.

Useful portmaster flags

Some of the most useful flags for portmaster:

  • -d will make it delete old distfiles after each port is installed, rather than asking you about it. (-D would make it keep them.)
  • -b will make it keep the backup it made when installing the previous version of a port. It usually deletes the backup after successfully installing a new version.
  • -x pattern will make it exclude ports (including dependencies) that match the glob pattern, unless there is a minimum version requirement which would be violated. You can't have more than one of these, though, so to exclude ports whose names cannot be reduced to a single glob, use -i to force interactive mode, which is where it asks you to confirm every port and dependency as they come up.
  • --update-if-newer will prevent rebuilding/reinstalling ports that don't need it. But for some reason, you have to specify more than one port on the command-line for this to work.
  • -r portname will rebuild portname and all ports that depend on it. This is for when sub-dependencies have been updated. For example, icecast2 requires libxslt, which requires libgcrypt. If you just tell portmaster to update or rebuild icecast2, it won't rebuild an already-up-to-date libxslt just to pick up a new version of libgcrypt. So to get the new libgcrypt into libxslt, you need to run portmaster -r libgcrypt.
  • -P will try to use pkg to just install a package if available.

Here's an example (to update Perl modules, and Perl if needed):

  • portmaster -b -d --update-if-newer --packages p5-

If you're going to be using some of these flags all the time, just put them in your /usr/local/etc/portmaster.rc (see the portmaster.rc.sample there for a template). Mine has these lines uncommented:

BACKUP=bopt
DONT_SCRUB_DISTFILES=Dopt
SAVE_SHARED=wopt
PM_LOG=/var/log/portmaster.log

Using portmaster with the -P or -PP options (or their equivalents in portmaster.rc) is really only safe if you are absolutely sure the packages will all be the versions you want, which is probably not the case if you are using the standard repository. However, it should be safe to uncomment the PM_PACKAGES_BUILD=pmp_build line in /usr/local/etc/portmaster.rc. You can also force certain packages to never be installed from the standard repository by adding a PT_NO_INSTALL_PACKAGE line to /etc/make.conf or /usr/local/etc/ports.conf. This feature is not very well documented (I asked about it in the forum), but I think it works like this: PT_NO_INSTALL_PACKAGE=www/nginx www/tt-rss

List installed packages

  • pkg info – names & descriptions
  • pkg info -aoq | sort – categories & names

Find packages by name

  • pkg info – list all installed packages (you can grep the results)
  • pkg info -g foo – list installed packages matching glob pattern foo
  • pkg info -x foo – list installed packages matching regex foo
  • pkg search foo – list available packages matching glob pattern foo
  • pkg search -x foo – list available packages matching regex foo

Show dependencies

  • pkg info -r foo – lists packages with runtime dependencies on the foo package.
  • pkg info -d foo – lists packages which foo depends on at runtime (non-recursive).

Only runtime dependencies are tracked by the package database. Build dependency info is in the ports collection.

To see what ports are needed to build, test, package, or run foo:

  • cd /usr/ports/`pkg info -oq foo` && make all-depends-list && cd -

You can also get all of these dependency lists from FreshPorts.

There's no easy way to see a complete list of just the build dependencies. You can use build-depends-list instead of all-depends-list, but it will not search the dependencies recursively.
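A rough sketch of what a recursive walk could look like, assuming each port's build-depends-list prints full /usr/ports paths (deps_of is a hypothetical helper name, not part of any tool):

```shell
#!/bin/sh
# Sketch only: recursively collect build dependencies by re-running
# build-depends-list for each dependency found. deps_of prints a port's
# direct build dependencies as category/name origins.
deps_of() {
    ( cd "/usr/ports/$1" && make build-depends-list ) 2>/dev/null |
        sed 's|^/usr/ports/||'
}

seen=""
walk() {
    for dep in $(deps_of "$1"); do
        # skip anything we've already reported
        case " $seen " in *" $dep "*) continue ;; esac
        seen="$seen $dep"
        echo "$dep"
        walk "$dep"
    done
}
# Usage: walk category/portname
```

This will be slow for the same reason the whatneeds script below is slow: every step is another make invocation.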

The hardest question to answer is "do I need foo for any of my installed packages?". For this you need to see if foo is in each package's all-depends-list. This will take a lot of time. Here's a script which will do it. I call it whatneeds, as in whatneeds openssl (my creation, CC0 license):

#!/bin/sh
[ ! "$1" ] && echo "Usage: $0 portname" && exit 99
port=`pkg info -oq $1`
[ ! "$port" ] && echo "$1 doesn't seem to be a port." && exit 99
sp='/-\|'
echo "By default, $1 is required by these ports (or a dependency thereof):"
pkg info -aoq | sort | while read x
do
    printf '\b%.1s' "$sp"
    sp=${sp#?}${sp%???}
    if [ -d "/usr/ports/$x" ]; then
        cd "/usr/ports/$x"
        make all-depends-list 2> /dev/null | fgrep -q "$port" && echo -e "\b$x"
    else
        echo -e "\b[could not check $x]"
    fi
done
echo -e "\b\c"
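The cryptic sp=${sp#?}${sp%???} line in that script just rotates the 4-character spinner string one position to the left. In isolation:

```shell
#!/bin/sh
# The spinner rotation from the whatneeds script, on its own:
# ${sp#?} drops the first character, ${sp%???} keeps only the first,
# so appending them rotates the string left by one each iteration.
sp='/-\|'
out=""
i=0
while [ $i -lt 5 ]; do
    out="$out$(printf '%.1s' "$sp")"   # take the current first character
    sp=${sp#?}${sp%???}                # rotate
    i=$((i+1))
done
printf '%s\n' "$out"   # prints /-\|/
```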

Check integrity of installed packages

portmaster

  • portmaster -v --check-port-dbdir — offers to delete saved options for ports no longer installed
  • portmaster -v --check-depends — makes sure installed ports' dependency info is consistent

pkg check

  • pkg check -d -n – checks package manifests for .so files, and reports if any are missing or don't pass cursory checks for validity. Those which aren't fully valid are reported as missing but are usually fine, and you can't do anything about their validity anyway, so this command is rather useless at the moment.

pkg_libchk

If you install the sysutils/bsdadminscripts port, you can run pkg_libchk to check for missing libraries. It even tells you which packages are affected.

libchk

If you install the sysutils/libchk port (which requires Ruby, which is huge), you can run libchk to check for missing libraries, check for unused libraries, and see exactly which binaries use each library. To figure out which port installed the file needing the library, you need to run pkg info -W /path/to/the/file.

See which installed packages could be updated

Always rebuild all installed packages as soon as possible after updating the OS.

At any other time:

  • pkg audit will tell you which installed packages have security vulnerabilities.
  • pkg version -P -v -l "<" will tell you what installed packages could be upgraded from the ports collection. It's slow.
  • pkg version -v -l "<" will tell you what installed packages could be upgraded from the packages collection. It's fast.

The upgrade info is based on the info in /usr/ports, not by seeing what's new online.

Some ports will just have a portrevision bump due to changes in the port's Makefile. These are usually unimportant and not worth the pain of rebuilding and reinstalling.

See what has changed in a particular port

To see what's new in a port, I typically just visit FreshPorts in a web browser. For example, https://www.freshports.org/mail/spamassassin has everything you could want to know about mail/spamassassin, including the commit history, which should tell you what's new in the port itself, and often this will include some info about the software that the port installs. Sometimes that's not enough, and you have to also go look at the software's changelog on some other website for details.

Is there a better way?

Conveniences

Install nano

I prefer to use a 'visual' text editor with familiar command keys, multi-line cut & paste, and regex search & replace. I never got the hang of the classic editor vi, I find emacs too complicated, and ee is too limited. I used pico for many years, and now use nano, which is essentially a pico clone with more features.

Pretty much anytime I run portmaster to install or upgrade something, I actually run portmaster -D so it doesn't prompt me at the end about keeping the distfiles.
  • portmaster editors/nano

See my nano configuration files document for configuration info.

Install Perl libwww

I like to use the HEAD and GET commands from time to time, to diagnose HTTP problems. These are part of Perl's libwww module, which is installed by other ports like Spamassassin. Those commands are nice to have anyway, so I like to install them right away:

  • portmaster www/p5-libwww

This will install a bunch of other Perl modules as dependencies.

Build 'locate' database

Why wait for this to run on Sunday night? Do it now so the locate command will work:

  • /etc/periodic/weekly/310.locate

OpenSSL config

It helps to know where your TLS (encrypted networking) tools and configuration files are.

The base system comes with OpenSSL libraries and tools.

Rather than letting ports use the base system's OpenSSL libs, I recommend installing security/openssl or security/libressl (OpenBSD's leaner, security-focused fork of OpenSSL) from the ports collection. You will have to add a line to /etc/make.conf to enforce the use of this port.
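If I recall correctly, the make.conf line in question is the following (verify against the Ports Collection documentation for your release; older releases used WITH_OPENSSL_PORT instead):

```
# /etc/make.conf: build ports against the LibreSSL port, not base OpenSSL
DEFAULT_VERSIONS+=ssl=libressl
```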

Base

  • Command-line tool = /usr/bin/openssl
  • Configuration file = /etc/ssl/openssl.cnf (11 KB)

LibreSSL port

  • Command-line tool = /usr/local/bin/openssl
  • Configuration file = /usr/local/etc/ssl/openssl.cnf (less than 1 KB)

When building the port, there are only a couple of 'make config' options: one for installing manpages and one for installing a TLS-enabled netcat. It doesn't matter what you choose.

OpenSSL port

  • Command-line tool = /usr/local/bin/openssl
  • Configuration file = /usr/local/openssl/openssl.cnf (not created by default)

I suggest copying the sample file provided by this port:

  • cd /usr/local/openssl && cp openssl.cnf.sample openssl.cnf

I no longer use this port, but when I did, I made sure to leave certain 'make config' options disabled:

  • SSE2 (it is for Intel Pentium 4 & newer CPUs only)
  • insecure protocols: SSL2, SSL3, MD2

I enabled RC5 (patent issues are not a concern for me), and left SHARED and THREADS enabled as well.

Replacement services

Optional: Install OpenNTPD

Instead of the stock ntpd, I briefly used OpenNTPD because it's slightly easier to configure and safer to update.

I also was perhaps a bit overly paranoid about the stock ntpd's requirement of always listening to UDP port 123.

  • portmaster net/openntpd
  • In /etc/rc.conf:
ntpd_enable="NO"
openntpd_enable="YES"
openntpd_flags="-s"

If you like, you can use /usr/local/etc/ntpd.conf as-is; it just says to use a random selection from pool.ntp.org, and to not listen on port 123 (it'll use random, temporary high-numbered ports instead).

Logging is the same as for the stock ntpd.

  • service ntpd stop (obviously not necessary if you weren't running the stock ntpd before)
  • service openntpd start

You can tail the log to see what it's doing. You should see messages about valid and invalid peers, something like this:

ntp engine ready
set local clock to Mon Feb 17 11:44:06 MST 2014 (offset 0.002539s)
peer x.x.x.x now valid
adjusting local clock by -0.046633s

Because of the issues with Unbound needing accurate time before it can resolve anything, I am going to experiment with putting time.nist.gov's IP address in /etc/hosts as the local alias 'timenistgov':

128.138.141.172 timenistgov

...and then have that be the first server checked in /usr/local/etc/ntpd.conf:

server timenistgov
servers pool.ntp.org

The hope is that the IP address will suffice when DNS is failing!

Later, I can set up a script to try to keep the timenistgov entry in /etc/hosts up-to-date. Of course, this will not help if they ever change the IP address while the BBB is offline.
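A minimal sketch of the file-editing half of such a script (update_hosts_entry is a hypothetical helper name; the actual DNS lookup of time.nist.gov, e.g. via drill or host, would happen elsewhere and feed in the IP):

```shell
#!/bin/sh
# Sketch, not tested in production: replace or add the timenistgov line
# in a hosts-format file, given a freshly looked-up IP address.
update_hosts_entry() {
    file=$1
    ip=$2
    tmp="$file.tmp"
    # drop any existing timenistgov line, then append the fresh one
    grep -v ' timenistgov$' "$file" > "$tmp"
    printf '%s timenistgov\n' "$ip" >> "$tmp"
    mv "$tmp" "$file"
}
# e.g.: update_hosts_entry /etc/hosts 128.138.141.172
```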

Optional: alternative mail delivery agent

FreeBSD still ships with Sendmail as the default Mail Transport Agent (MTA). Sendmail is basically the "reference implementation" of SMTP and related mail services. Many years ago it had some major security holes, and it still has a bad reputation for being difficult to configure. For basic mail service, though, it really isn't difficult to set up, and it does work very well. Nevertheless, for peace of mind and ease of maintenance, most sysadmins nowadays run a replacement MTA like Postfix or Exim. The problem with those is they can be memory hogs, not good for a limited-RAM environment like the BeagleBone Black.

In hopes that it would be secure and lightweight, I really wanted to try OpenSMTPD from the OpenBSD project, but when I first attempted it in 2015, there were some bugs which prevented it from working on the BBB. The bugs were eventually fixed, but I needed mail service immediately, so I just set up Sendmail and have been using it ever since. If I get around to trying OpenSMTPD again, I will put more info here.

Sendmail setup

You can skip this if you installed an alternative MTA.

I am writing this long after the fact, so some steps are probably missing.

The snapshots for the BBB come with the Sendmail daemon disabled in /etc/rc.conf, so immediately some emails (from root to root) start plugging up the queue, as you can see in /var/log/maillog. (Something interesting: from the messages in /var/log/maillog about missing /etc/mail/certs, it looks like the client supports STARTTLS without having to be custom-built with SASL2 like I had to do in FreeBSD 8. Not sure what's up with that.)

Assuming you have set a FQDN (hostname.domain.topleveldomain with no trailing period) as your hostname in /etc/rc.conf, you can create your Sendmail config files like this:

  • cd /etc/mail && make all

Now you will have a .mc file just for your hostname. In my case it is chilled.skew.org.mc. Edit this file to tweak how the server processes inbound mail. Most likely you will want to accept mail without the FQDN as well (i.e. root@skew.org should work, not just root@chilled.skew.org), so you do that like this:

MASQUERADE_AS(skew.org)
FEATURE(masquerade_envelope)

And then for basic spam protection, I also added these lines:

FEATURE(`enhdnsbl', `bl.score.senderscore.com', `"550 Mail refused - see https://www.senderscore.org/lookup.php?lookup="$&{client_addr}', `t')dnl
FEATURE(`enhdnsbl', `zen.spamhaus.org', `"550 Mail refused - see http://www.spamhaus.org/query/bl?ip="$&{client_addr}', `t')dnl
FEATURE(`enhdnsbl', `b.barracudacentral.org', `"550 Mail refused - see http://www.barracudacentral.org/reputation?r=1&ip="$&{client_addr}', `t')dnl

The `t' means that if the lookup doesn't work (e.g., because the RBL service is down), instead of accepting the mail, it will be deferred with a temporary rejection message like "451 Temporary lookup failure of 127.0.0.1 at zen.spamhaus.org".

You might see some tutorials that say to use dnsbl instead of enhdnsbl. Usually it doesn't matter. enhdnsbl just supports an optional final argument: an IP address to look for in the response. (Some RBLs respond with a variety of IP addresses to represent different types or levels of suspicion.)

You might also see some tutorials that say to use code 554 instead of 550. I have read that 550 is better because 554 is not actually allowed as a response to the MAIL command.

As for my choice of RBLs, SenderScore and Spamhaus are free and don't require any kind of registration unless you're commercial or high-volume. bl.score.senderscore.com does not block a lot of spam (and I heard via an insider that this company is actually in the spam business), but it should just work because it accepts queries from anywhere. zen.spamhaus.org blocks queries forwarded through the DNS servers of major ISPs, so it will fail to block anything unless you use a DNS server which doesn't forward queries for that domain. BarracudaCentral supposedly requires registration of your DNS server; I filled out the form and it seems to be working for me, even though I didn't get an explicit approval message from them. Oddly, SpamAssassin queries bb.barracudacentral.org, which requires no registration. If I confirm b.barracudacentral.org works at the MTA level, then I will look into disabling the SpamAssassin check.

These might be a good idea, too (unconfirmed):

FEATURE(`greet_pause', 5000)dnl
define(`confBAD_RCPT_THROTTLE', `1')dnl

The greet_pause line delays the SMTP greeting by 5 seconds, which hopefully is too long for some spammers. The BAD_RCPT_THROTTLE will insert a delay after someone tries to send to a single bogus or outdated email address.

To ensure local mail is delivered properly, in /etc/mail/local-host-names I put trailing-dot and non-trailing-dot versions of all of the FQDNs which might be encountered:

skew.org
skew.org.
chilled.skew.org
chilled.skew.org.
localhost.skew.org
localhost.skew.org.

I also edited /etc/mail/aliases to make sure that postmaster is an alias for root and root is an alias of my username. You can set this up however you want, even to forward elsewhere.
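The relevant lines in /etc/mail/aliases end up looking something like this (username is a placeholder for your actual account name):

```
postmaster: root
root: username
```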

/etc/mail/mailer.conf is like a list of virtual symlinks for standard mail functions; they all need to be pointing to /usr/libexec/sendmail/sendmail.

To ensure Sendmail runs at system startup, /etc/rc.conf needs this:

sendmail_enable="YES"
sendmail_submit_enable="YES"
sendmail_outbound_enable="YES"
sendmail_msp_queue_enable="YES"

After all these configs are ready, run make all install restart from the /etc/mail directory, and it should generate the .cf files, update the databases, and fire up the server.

You can just run the same command after making any config changes. Look in /etc/mail/Makefile for other options.

Always test that email delivery actually works after you make any configuration changes!

Enable ccache

ccache is a cache for C compilers. It replaces your clang or gcc executables with its own wrappers and should result in a huge speedup of building any software written in C.

You can install it yourself now, or you can wait and let it be installed automatically when building something in C from the ports collection (provided /etc/make.conf is as described below):

  • portmaster devel/ccache

Add the following to /etc/make.conf (and /etc/src.conf if you want to use it when building the base system too):

# support devel/ccache; see /usr/local/share/doc/ccache/ccache-howto-freebsd.txt
# I think this isn't OPTIONS_SET+= CCACHE_BUILD because (in FreeBSD 11+) it is also for base, not just ports
WITH_CCACHE_BUILD=yes

Add the following to ~/.cshrc:

### devel/ccache port requires non-root users (even if su'd) set CCACHE_DIR
setenv CCACHE_DIR ~/.ccache

Upgrading from MySQL 5.6 to 5.7

Doing it in-place is not very smooth.

  • merge /var/db/mysql/my.cnf into /usr/local/etc/mysql/my.cnf
    • migrate all your settings as needed
    • set innodb_data_file_path to use 5M
  • rm /var/db/mysql/my.cnf
  • ln -s /usr/local/bin/mysqlcheck /usr/local/bin/mysql_check
  • in another terminal, tail -f /var/db/mysql/chilled.skew.org.err
  • service mysql-server start
  • now it should complain that it wants minimum 12 MB, so change it and restart

Dealing with mixed-case table names in database foo preventing server startup:

  • In my.cnf, comment out lower_case_table_names=1
  • service mysql-server start (hopefully it works now)
  • mysqldump -uroot -pCHANGEME --databases foo > /var/tmp/foo.sql
  • mysql -uroot -pCHANGEME
    • DROP DATABASE foo;
    • RESET MASTER;
    • QUIT;
  • service mysql-server stop
  • uncomment lower_case_table_names
  • service mysql-server start
  • mysql -uroot -pCHANGEME < /var/tmp/foo.sql
  • mysql_upgrade -u root -pCHANGEME
  • service mysql-server restart

Install MySQL

I would install 5.7 now if starting from scratch; what follows is my notes from installing 5.6:

  • portmaster databases/mysql56-server

This will install mysql56-client, cmake, perl, and libedit. cmake has many dependencies, including Python (py-sphinx), curl, expat, jsoncpp, and libarchive. Depending on whether you've got Perl and Python already (and up-to-date), this will take roughly 3 to 6 hours.

MySQL is a bit of a RAM hog. On a lightly loaded system, it should do OK, though. Just make sure you have swap space!

Secure it

Ensure the server won't be accessible to the outside world:

  • echo '[mysqld]\nbind-address=127.0.0.1\ntmpdir=/var/tmp' > /var/db/mysql/my.cnf

In newer versions of the port, my.cnf is now in /usr/local/etc/mysql.
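If your shell's echo doesn't expand \n (see the note at the top of this page), printf is a portable alternative. A sketch, writing to a placeholder path ($cnf) since the right location depends on the port version:

```shell
#!/bin/sh
# Portable version of the echo above; printf expands \n in any shell.
# $cnf is a stand-in: /var/db/mysql/my.cnf on older ports,
# /usr/local/etc/mysql/my.cnf on newer ones.
cnf=${cnf:-/tmp/my.cnf}
printf '[mysqld]\nbind-address=127.0.0.1\ntmpdir=/var/tmp\n' > "$cnf"
```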

Make it use less memory than normal

MySQL can be configured to use much less RAM than the default. For MySQL 5.6, I added this to the mysqld section of my.cnf:

innodb_buffer_pool_size=5M
innodb_log_buffer_size=256K
query_cache_size=0
max_connections=10
key_buffer_size=8
thread_cache_size=0
host_cache_size=0
innodb_ft_cache_size=1600000
innodb_ft_total_cache_size=32000000
thread_stack=131072
sort_buffer_size=32K
read_buffer_size=8200
read_rnd_buffer_size=8200
max_heap_table_size=16K
tmp_table_size=1K
bulk_insert_buffer_size=0
join_buffer_size=128
net_buffer_length=1K
innodb_sort_buffer_size=64K
binlog_cache_size=4K
binlog_stmt_cache_size=4K

This should keep the RAM footprint fairly low. Naturally, under these restrictions, performance will end up being worse if whatever is using the database needs to do a high volume of queries and updates.

For MySQL 5.7, here's my complete my.cnf (some values are different from the above):

# $FreeBSD: branches/2019Q1/databases/mysql57-server/files/my.cnf.sample.in 414707 2016-05-06 14:39:59Z riggs $

[client]
port                            = 3306
socket                          = /tmp/mysql.sock

[mysql]
prompt                          = \u@\h [\d]>\_
no_auto_rehash

[mysqld]
user                            = mysql
port                            = 3306
socket                          = /tmp/mysql.sock
bind-address                    = 127.0.0.1
basedir                         = /usr/local
datadir                         = /var/db/mysql
tmpdir                          = /var/db/mysql_tmpdir
slave-load-tmpdir               = /var/db/mysql_tmpdir
secure-file-priv                = /var/db/mysql_secure
log-bin                         = mysql-bin
#log-output                      = TABLE
log-output                      = FILE
master-info-repository          = TABLE
relay-log-info-repository       = TABLE
relay-log-recovery              = 1
slow-query-log                  = 1
server-id                       = 1
sync_binlog                     = 1
sync_relay_log                  = 1
#binlog_cache_size               = 16M
binlog_cache_size               = 4K
expire_logs_days                = 30
default_password_lifetime       = 0
enforce-gtid-consistency        = 1
gtid-mode                       = ON
safe-user-create                = 1
lower_case_table_names          = 1
explicit-defaults-for-timestamp = 1
myisam-recover-options          = BACKUP,FORCE
open_files_limit                = 32768
table_open_cache                = 16384
table_definition_cache          = 8192
net_retry_count                 = 16384
#key_buffer_size                 = 256M
key_buffer_size                 = 8M
max_allowed_packet              = 64M
query_cache_type                = 0
query_cache_size                = 0
long_query_time                 = 0.5
#innodb_buffer_pool_size         = 1G
innodb_buffer_pool_size         = 5M
innodb_data_home_dir            = /var/db/mysql
innodb_log_group_home_dir       = /var/db/mysql
#innodb_data_file_path           = ibdata1:128M:autoextend
innodb_data_file_path           = ibdata1:12M:autoextend
#innodb_temp_data_file_path      = ibtmp1:128M:autoextend
innodb_temp_data_file_path      = ibtmp1:12M:autoextend
innodb_flush_method             = O_DIRECT
innodb_log_file_size            = 256M
innodb_log_buffer_size          = 256K
innodb_write_io_threads         = 8
innodb_read_io_threads          = 8
innodb_autoinc_lock_mode        = 2
skip-symbolic-links

# all of the following added by mjb
max_connections=20
thread_cache_size=0
host_cache_size=0
innodb_ft_cache_size=1600000
innodb_ft_total_cache_size=32000000
thread_stack=256K
sort_buffer_size=32K
read_buffer_size=8200
read_rnd_buffer_size=8200
max_heap_table_size=16K
tmp_table_size=1K
bulk_insert_buffer_size=0
join_buffer_size=128
net_buffer_length=1K
innodb_sort_buffer_size=64K
binlog_stmt_cache_size=4K
general_log      = 1

[mysqldump]
max_allowed_packet              = 256M
quote_names
quick

It's possible some of these values are not optimal, but it's not easy to know what they should be.

Start it

  • echo 'mysql_enable="YES"' >> /etc/rc.conf
  • service mysql-server start – this may take a minute, as it may have to use a little bit of swap.
  • service mysql-server status – this is the only way to know if the server successfully started!

If there was a problem starting the server, look in /var/db/mysql for the log file. It will have your FQDN as the base name of the file. It should tell you exactly what went wrong.

Further secure it

If you are not restoring data from a backup (see next subsection), do the following to delete the test databases and set the passwords (yes, plural!) for the root account:

  • Refer to Securing the Initial MySQL Accounts.
  • mysql -uroot
    • DELETE FROM mysql.db WHERE Db='test';
    • DELETE FROM mysql.db WHERE Db='test\_%';
    • SET PASSWORD FOR 'root'@'localhost' = PASSWORD('foo'); – change foo to the actual password you want
    • SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('foo'); – use the same password
    • SET PASSWORD FOR 'root'@'::1' = PASSWORD('foo'); – use the same password
    • SELECT User, Host, Password FROM mysql.user WHERE user='root'; – see what other hosts have an empty root password, and either set a password or delete those rows. For example: DELETE FROM mysql.user WHERE Host='localhost.localdomain';
    • \q
  • mysqladmin -uroot -pfoo ping – This is to make sure the password works and mysqld is alive.

(If you were to just do mysqladmin password foo, it would only set the password for 'root'@'localhost'.)

Restore data from backup

On my other server, every day, I ran a script to create a backup of my MySQL databases. To mirror the data here, I can copy the resulting .sql file (bzip2'd), which can be piped right into the client on this machine to populate the database here:

  • bzcat mysql-backup-20151022.sql.bz2 | mysql -uroot -pfoo – foo is the root password, of course. There is no root password on a new installation before MySQL 5.7, so omit the -p and password in that case.

If you get the error "@@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty" when starting the restore, this can be resolved by issuing the RESET MASTER command in the MySQL client.

If you have trouble restoring from a backup on top of an old installation, you can stop the server, remove the ib_logfile0, ib_logfile1, and ibdata1 files, and optionally remove the database folders from /var/db/mysql. Upon starting the server, you should then have a 'clean' installation.

If you used the --all-databases option with mysqldump, the backed up data should include the mysql.db and mysql.user tables, so you should have all databases and account & password data. However, it seems my more recent dumps either did not have the user table data or it wasn't getting loaded for some reason, so I had to manually re-create the users and grant permissions.

To see what users exist:

mysql -uroot
connect mysql;
select user,host from user;

If you see any users you want to drop (adjust as needed): drop user 'username'@'localhost';

To create a new user (adjust as needed): create user 'username'@'localhost' identified by 'password';

To allow a user full control of database dbname: grant all privileges on dbname.* to 'username'@'localhost' identified by 'password';

After loading from backup, this may not be necessary, but I recommend also performing any housekeeping needed to ensure the tables are compatible with this server:

  • mysql_upgrade -pfoo --force
  • service mysql-server restart

Change socket file location

While trying to diagnose another problem, it was suggested that I make sure the mysql.sock file, normally in /tmp, is on a tmpfs file system (RAM disk). I had disabled tmpfs in my /etc/fstab because too many things expect /tmp to be huge, and I did not like having all that memory unavailable for other uses.

I decided to go ahead and create a small tmpfs file system mounted in a unique directory, just for sockets. I am not sure how small is too small, though. Can it be, say, 16 KB? Well, I am trying it out.

Add to /etc/fstab:

tmpfs   /tmpsockets     tmpfs   rw,mode=1777,size=16k   0       0
  • mkdir -m 1777 /tmpsockets
  • mount /tmpsockets
  • Edit php.ini to set mysqli.default_socket and pdo_mysql.default_socket to /tmpsockets/mysql.sock
  • Edit /usr/local/etc/mysql/my.cnf to set socket=/tmpsockets/mysql.sock (it is in 2 places!)

Restart MySQL and php-fpm.

Install Python symlinks

When Python is installed by another port (e.g. by cmake, as required to build MySQL), it may be just a specific version, e.g. the lang/python27 port, rather than the more generic lang/python2 or lang/python (which can end up installing different versions of Python depending on make.conf variables).

This means you may not get the recommended and pretty much standard symlinks python2 and python installed in /usr/local/bin. Well, there are ports for that! This should cover it:

  • [ -e /usr/local/bin/python2.7 -a ! -L /usr/local/bin/python2 ] && portmaster -x python27 lang/python2
  • [ ! -L /usr/local/bin/python ] && portmaster -x python27 lang/python

The -x python27 will prevent a time-consuming upgrade or rebuild of the python27 port.

Install nginx

I'm a longtime Apache httpd administrator (even co-ran apache.org for a while) but am going to see if nginx will work just as well for what I need:

  • HTTPS with SNI (virtual host) and HSTS header support
  • URL rewriting and aliasing
  • PHP (to support MediaWiki)
  • basic authentication
  • server-parsed HTML (for timestamp comments, syntax coloring)
  • fancy directory indexes (custom comments, but I can live without)

Let's get started:

  • portmaster -D www/nginx – installs PCRE as well
    • Modules I left enabled: DSO, IPV6, HTTP, HTTP_CACHE, HTTP_REWRITE, HTTP_SSL, HTTP_STATUS, WWW
    • HTTP/2-related modules I left enabled: HTTP_SLICE, HTTPV2, STREAM, STREAM_SSL
    • Modules I also enabled: HTTP_FANCYINDEX
  • echo 'nginx_enable="YES"' >> /etc/rc.conf

For security, I am not sharing my complete /usr/local/etc/nginx/nginx.conf here.

Since early 2016, some modules require dynamic loading, so with the above module set, you'll need this at the very top of the nginx.conf file:

load_module /usr/local/libexec/nginx/ngx_http_fancyindex_module.so;
load_module /usr/local/libexec/nginx/ngx_stream_module.so;

Try it out:

  • service nginx start
  • Visit your IP address in a browser (just via regular HTTP). You should get a "Welcome to nginx!" page.

Immediately I'm struck by how lightweight it is: processes under 14 MB instead of Apache's ~90 MB.

Enable HTTPS service

Prep for HTTPS support (if you haven't already done this):

  • Put your private key (.key) and cert (.crt or .pem) somewhere.
  • Create a 2048-bit Diffie-Hellman group: openssl dhparam -out /etc/ssl/dhparams.pem 2048

Enable HTTPS support by putting this in /usr/local/etc/nginx/nginx.conf for each HTTPS server:

    server {
        listen       443 ssl;
        server_name  localhost;
        root   /usr/local/www/nginx;

        ssl_certificate      /path/to/your/cert;
        ssl_certificate_key  /path/to/your/server_key;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_dhparam /etc/ssl/dhparams.pem;

        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;" always;
        gzip off;

        location / {
            index  index.html index.htm;
        }
    }

This config includes HSTS support; "perfect" forward secrecy (PFS); and mitigation of the POODLE, CRIME, BEAST, and BREACH attacks. (CRIME attack mitigation is assumed because OpenSSL is built without zlib compression capability by default now.)

Unlike Apache, nginx does not support separate files for your certificate chain. The cert file used by nginx should contain not just your site's cert, but also any other certs that you don't expect clients (browsers) to trust, e.g. any intermediate certs, appended in order after your cert. Otherwise, some clients will complain or will consider your cert to be self-signed.
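The concatenation itself is a one-liner. A sketch with placeholder contents standing in for real PEM data — the point is only the order (your server certificate first, then the intermediates):

```shell
# Stand-ins for real PEM files; the order is what matters to nginx.
printf 'SERVER CERT\n' > site.crt
printf 'INTERMEDIATE CERT\n' > intermediate.crt
cat site.crt intermediate.crt > chained.crt  # this file goes in ssl_certificate
cat chained.crt
```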

If you like, you can redirect HTTP to HTTPS:

    server {
        listen 80;
        server_name localhost;
        root /usr/local/www/nginx;
        return 301 https://$host$request_uri;
    }
  • service nginx reload
  • Check the site again, but this time via HTTPS. Once you verify it's working, you can tweak the config as you like.

If your server is publicly accessible, test it via the SSL Server Test by Qualys SSL Labs.

Handle temporarily offline sites

If a website needs to be taken down temporarily, e.g. for website backups, you can configure nginx to respond with an HTTP 503 ("Service Unavailable") any time your backup script creates a file named ".offline" in the document root:

        location / {
            if (-f '$document_root/.offline') {
               return 503;
            }
            ...
        }

The backup script needs to remove the file when it's done, of course.
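The marker handling in such a backup script can be as simple as this sketch (a temp directory stands in for the real document root; the trap removes the marker even if a backup step fails partway through):

```shell
docroot=$(mktemp -d)                    # stand-in for the real document root
trap 'rm -f "$docroot/.offline"' EXIT   # always clean up the marker on exit
touch "$docroot/.offline"               # nginx starts answering 503
# ... mysqldump / tar steps would go here ...
ls -A "$docroot"                        # marker is present for the duration
```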

Alternatively, you can customize the 503 response page. Just make sure a sub-request for that custom page won't be caught by the "if":

        location /.website_offline.html { try_files $uri =503; internal; }

        location / {
            if (-f '$document_root/.offline') {
               error_page 503 /.website_offline.html;
               return 503;
            }
            ...
        }

Here is my custom 503 page for when my wiki is offline:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>wiki offline temporarily</title>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
  </head>
  <body>
    <div style="float: left">
      <h1>Don't panic.</h1>
      <p>The wiki is temporarily offline.</p>
      <p>It might be for a daily backup, in which case it should be online within 15 minutes.</p>
    </div>
    <div>
      <!-- get your own: http://theoatmeal.com/comics/state_web_summer#tumblr -->
      <img style="float: right; width: 400px" src="//skew.org/oatmeal_tumbeasts/tbrun1.png" alt="[Tumbeast illustration by Matthew Inman (theoatmeal.com); license: CC-BY-3.0]">
    </div>
  </body>
</html>

nginx quirks

As compared to Apache, nginx has some quirks.

  • There is no support for .htaccess files, nor anything similar; only root controls how content is served.
  • In the config file, there is no way to toggle or test for modules.
  • There is nothing like AddHandler and Action; a content processor must be accessed via its own FastCGI server.
  • Server-side includes are rudimentary and do not include my longtime favorite instruction #flastmod.
  • Fancy directory indexes can have custom headers & footers, but the index itself cannot be customized (no adding descriptions or setting the width).
  • Root and Alias are inherited but cannot be overridden. You have to get clever with nested Location directives.
  • Intermediate TLS certificates must be in the same file as the server certificate.
  • The types directive must be a complete list of MIME type mappings. You can't include mime.types and add to it via types directives.
  • New log files are created mode 644 (rw-r--r--), owned by the user the worker processes run as ('www'). Their containing directories must be owned by the same user; it is not enough that they just be writable by that user via group permissions.

The Location directives are matched against the normalized request URI string in this order:

  1. Location = string
  2. longest matching Location ^~ prefix
  3. first matching Location ~ regex or Location ~* case-insensitive-regex
  4. longest matching Location prefix

I have seen #3 and #4 get mixed up, though.
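A sketch of the four match types side by side (paths and bodies are hypothetical):

```nginx
location = /status   { return 200; }           # 1: exact match wins outright
location ^~ /images/ { try_files $uri =404; }  # 2: longest ^~ prefix; regexes skipped
location ~ \.php$    { return 404; }           # 3: first matching regex, in file order
location /           { try_files $uri =404; }  # 4: longest plain prefix, the fallback
```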

Install PHP

  • portmaster lang/php72

For use via nginx, make sure the FPM option is checked (it is by default). FPM, the FastCGI Process Manager, runs a server on localhost port 9000 which, speaking the binary FastCGI protocol, launches PHP processes as if they were CGI scripts.

Configure nginx to use PHP FPM

  • echo 'php_fpm_enable="YES"' >> /etc/rc.conf
  • service php-fpm start

Add to /usr/local/etc/nginx/nginx.conf:

        location ~ [^/]\.php(/|$) {
            root /usr/local/www/nginx;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            fastcgi_intercept_errors on;
            if (!-f $document_root$fastcgi_script_name) {
                return 404;
            }
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            include        fastcgi_params;
        }
  • service nginx reload
  • echo '<?php var_export($_SERVER)?>' > /usr/local/www/nginx/test.php
  • echo '<?php echo phpinfo(); ?>' > /usr/local/www/nginx/phpinfo.php
  • In your browser, visit /test.php/foo/bar.php?v=1 and /phpinfo.php ... when confirmed working, move the test files to somewhere not publicly accessible.

Diagnosis of php-fpm worker crashes

After upgrading to FreeBSD 12 and PHP 7.2, I am finding that php-fpm works, but the workers sometimes crash when first used, resulting in this kind of message in /var/log/php-fpm.log: child 14116 exited on signal 11 (SIGSEGV) after 0.000112 seconds from start. I've Googled like crazy to figure out what is going wrong, but cannot find anything that applies to my situation.

It also does not matter if I use a package or build PHP from ports.

I asked about it in the forum, just to get ideas on how to get logs and core dumps, because the procedure was not obvious. Using some tips there, I got PHP error reporting going. It requires turning on several things in php.ini:

display_errors = On
log_errors = On
error_log = /var/log/php_errors.log
fastcgi.logging = 1
error_reporting = E_ALL

(display_errors=On is what makes the errors show up via stdout / embedded in web pages. It's not really necessary.)

In php-fpm.conf, I have adjusted these settings as well:

log_level = debug
catch_workers_output = yes

MySQL's logs live in /var/db/mysql, and you can get an additional log (default name: hostname.log) containing all user activity via these my.cnf settings:

general_log=1
log-output=FILE

Since it contains all the SQL statements, the log will get huge fast, so only use it for debugging.

To get core dumps:

  • sysctl kern.sugid_coredump=1 – enable core dumps for setuid/setgid processes
  • sysctl kern.corefile=/var/tmp/%N.core – put core dumps in a directory the processes can write to

In php-fpm.conf, to get core dumps, you need to set rlimit_core = unlimited in two places (under global and www).

However, don't bother with core dumps if you didn't build PHP with the port's DEBUG option selected (equivalent to the --enable-debug configure option); without it, there are no symbols in the php-fpm executable.

Here's a little test script you can put in a PHP-enabled section of your website in order to report any problems as it gets the latest content of a MediaWiki page (by title, with underscores, without namespace) (replace foo with appropriate values):

<?php
        $dbname = "foo";
        $dbuser = "foo";
        $dbpass = "foo";
        $pagename = 'foo';
        echo '<h1>Testing...</h1>';
        try {
                $dsn = "mysql:host=localhost;dbname=$dbname";
                $dbh = new PDO($dsn, $dbuser, $dbpass, array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
                $query = "SELECT old_text FROM mw_text WHERE old_id IN (SELECT rev_text_id FROM mw_revision LEFT JOIN mw_page ON page_latest = rev_id WHERE page_title LIKE '" . $pagename . "');";
                $sth = $dbh->query($query);
                $result = $sth->fetch(PDO::FETCH_OBJ);
                $raw_content = $result->old_text;
                echo '<pre style="margin: 3em; border: 1px solid black; background: #eee; color: #000; white-space: pre-wrap;">';
                echo str_replace('<','&lt;',str_replace('&','&amp;',$raw_content));
                echo '</pre>';
                $sth = null;
                $dbh = null;
        } catch (PDOException $e){
                echo $e->getMessage();
        }
?>

This test script did not result in any core dumps, but I can produce them in MediaWiki (usually) when I try to edit pages. The dumps show that perhaps there's a problem with pcre ...maybe.

I am now asking about it on the mediawiki-l mailing list. So far, no leads.

I'm reminded of a pcre problem from 5 years ago. This got me digging into my cache settings. In LocalSettings.php, I still had $wgMainCacheType set to CACHE_ACCEL instead of CACHE_NONE. The docs suggest CACHE_ACCEL was only ever for APC or xCache, neither of which is supported anymore, so I set it to CACHE_NONE. I also purged the objectcache table via the command TRUNCATE TABLE mw_objectcache in the MySQL client. However, this did not stop the crashes.

Enable OPcache

Like Perl, PHP compiles scripts to bytecode ("opcodes") every time they are run. This results in slow performance for large or often-used scripts, such as those typically used for web apps, so it is pretty much essential to use some kind of "PHP accelerator" or web cache. PHP has built-in support for Zend OPcache, but on FreeBSD it is a separate port; install it and then enable it in your php.ini (the directives should already be there, just commented out).

  • portmaster www/php72-opcache
  • edit /usr/local/etc/php.ini to uncomment the opcache.enable line (it's near the bottom)
  • service php-fpm restart

OPcache stores and reuses bytecode in RAM, so on a low-memory system like the BBB you might want to tune it further by adjusting some settings in php.ini, particularly opcache.memory_consumption and opcache.max_accelerated_files. I'm finding that with MediaWiki installed, OPcache is going to eventually use as much RAM as you give it, with 23 MB of overhead, quickly soaring to over 40 MB when browsing articles. I would not allocate less than 64 MB and 3907 files to start. The default is 128 MB and 16229.
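A hedged starting point in php.ini, using the figures above (raise them if the cache fills up):

```ini
; modest allocation for a low-memory board like the BBB
opcache.enable=1
opcache.memory_consumption=64        ; MB
opcache.max_accelerated_files=3907
```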

Here's a little script you can put on your website to check the status of the cache, e.g. to see what the memory usage is:

<?php
        $info = opcache_get_status();
        if($info) {
                echo '<p>OPcache status:</p><pre>'; print_r($info); echo '</pre>';
        } else {
                echo '<p>OPcache is not enabled.</p>';
        }
?>

Ignore any references you might find to XCache or APC. These are unmaintained and deprecated; OPcache is really the only option nowadays.

Periodically delete expired PHP session data

If you run PHP-based websites for a while, you will probably notice session data tends to get left behind. This is because PHP defaults to storing session data in /tmp or /var/tmp, and has only a 1 in 1000 chance of running a garbage collector upon the creation of a new session. The garbage collector expires sessions older than php.ini's session.gc_maxlifetime (1440 seconds, i.e. 24 minutes, by default). You can increase the probability of it running, but you still must wait for a new session to be created, so that's really only useful for sites which get a new session every 24 minutes or less. Otherwise, you're better off (IMHO) just running a script to clean out the stale session files. So I use the script below, invoked hourly from root's crontab:

#!/bin/sh
echo "Deleting the following stale sess_* files:"
find /tmp /var/tmp -type f -name sess_\* -cmin +$(echo `/usr/local/bin/php -i | grep session.gc_maxlifetime | cut -d " " -f 3` / 60 | bc)
find /tmp /var/tmp -type f -name sess_\* -cmin +$(echo `/usr/local/bin/php -i | grep session.gc_maxlifetime | cut -d " " -f 3` / 60 | bc) -delete
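The command substitution in those find invocations just converts session.gc_maxlifetime from the seconds that php -i reports into the minutes that -cmin expects. With PHP's default of 1440 seconds:

```shell
gc_maxlifetime=1440                 # seconds, PHP's default
minutes=$(( gc_maxlifetime / 60 ))  # the script pipes through bc instead
echo "$minutes"                     # 24
```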

Of course you can store session data in a database if you want, and the stale file problem is avoided altogether. But then that's just one more thing that can break.

Here's what I put in root's crontab, via crontab -e:

# every hour, clear out the PHP session cache
10 * * * *  /usr/local/adm/clean_up_php_sessions > /dev/null 2>&1

Upgrade PHP

PHP needs to be upgraded practically every month due to security holes. You will see when you run pkg audit or read your nightly security reports.

  • portmaster php72
  • service php-fpm restart

Visit a PHP-based webpage to make sure it's working.

  • portmaster php72- (the trailing hyphen matches all the php72-* extension ports) may also be needed in order to get all the PHP extensions upgraded.

Mitigate listen queue overflows

I started getting some mysterious messages in my system logs like sonewconn: pcb 0xc2d1da00: Listen queue overflow: 16 already in queue awaiting acceptance (838 occurrences). It means a network daemon isn't handling incoming connections fast enough, so they start queuing up in the OS, and now there's too many of them.

The cause could be a denial-of-service attack, port scan, naturally heavy network loads, a bug in the daemon, or the daemon is starved for resources (e.g. due to some other problem on the system) and just running especially slowly. I think the latter is what's happening to me.

Each server which listens on TCP ports tells the kernel to accept a certain number of overflow connections for each port. It is a safety queue for when connections come in faster than the daemon can handle them. If the daemon doesn't pick a number, the default is whatever the kern.ipc.somaxconn value was at the time the server was started.

You can see the current limits with netstat -Lan. The overflow warnings come when there are 50%+ more connections than the limit. You can find the overloaded daemon by looking for a max. connections value which is slightly less than two-thirds the "already in queue" number from the warning. So "16 already in queue" means the culprit is a daemon with 10 max. connections: probably sendmail.
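That arithmetic, worked in the other direction — a sketch assuming the warn-past-150%-of-limit rule of thumb above:

```shell
queue=16                            # "16 already in queue" from the warning
backlog=$(( (queue - 1) * 2 / 3 ))  # invert the 1.5x overflow threshold
echo "$backlog"                     # 10, i.e. sendmail's listen backlog
```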

The kernel's default limit of 128 connections in the TCP listen queue (per port) may be too low for busy servers, so you can mitigate the issue somewhat by bumping up kern.ipc.somaxconn to 1024 or more. This is what was recommended in the tuning kernel limits info in The FreeBSD Handbook. However, the current version of the handbook says that the correct setting to adjust is actually kern.ipc.soacceptqueue. They are actually the same thing!

  • echo kern.ipc.soacceptqueue=1024 >> /etc/sysctl.conf
  • service sysctl start

After bumping up the kernel limit, you should restart all your servers which are still showing 128:

  • service php-fpm restart
  • service nginx restart
  • service local_unbound restart

sa-spamd (port 783) and sshd (port 22 or whatever) apparently use hard-coded limits of 128, so no need to restart them.

Now when you run netstat -Lan you should see php-fpm (port 9000), nginx (ports 80 & 443), and unbound (port 53) all now use higher limits like 1024 or 256. This is not really a solution though, especially if you have low network traffic; the problem may be that something else is hogging system resources, and the network daemon generating the warning is really just the first one to suffer.

Install MediaWiki

  • portmaster www/mediawiki130 – or whatever the latest version is.

In the 'make config' step, only enable MySQL. Disable sockets; that feature is only used by memcached.

Other dependencies which will be installed: various PHP extensions, and PHP & MySQL if you haven't already installed them.

  • service php-fpm reload

For my first install, I already have the database set up (restored from a backup of another installation), so instead of doing the in-place web install, I'll just copy my config, images and extensions from my other installation:

  • scp -pr 'otherhost:/usr/local/www/mediawiki/{AdminSettings.php,LocalSettings.php,images,robots.txt,favicon.ico}' /usr/local/www/mediawiki
  • scp -pr 'otherhost:/usr/local/www/mediawiki/extensions/{CheckUser,Cite,Nuke}' /usr/local/www/mediawiki/extensions

For future updates, the config and 3rd-party extension files should remain as-is; just be sure to check UPDATING for any important news, and before allowing web access, run cd /usr/local/www/mediawiki/maintenance && php update.php so the database tables are up-to-date.

LocalSettings.php tweaks

Obfuscate sensitive variables

In an attempt to increase security (though with a performance penalty), I've replaced the block of database variables in LocalSettings.php as well as the whole of AdminSettings.php with include("/path/to/db-vars.php");, with db-vars.php containing the block in question wrapped in <?php...?>. Those files are outside of any website's document root, yet still readable by the nginx worker process (e.g. owner or group 'www') and backup scripts, but not anyone else.

Simplify URLs

Like Wikipedia, I want to support short URLs like /wiki/articlename:

  • In LocalSettings.php, make MediaWiki generate short article links:
$wgScriptPath       = "";     # instead of /wiki
$wgArticlePath = "/wiki/$1";  # instead of /wiki/index.php?title=$1
  • In nginx.conf, make nginx internally rewrite these requested URLs to defaults (this is replacing my previous "location /" block):
        location / {
            index  index.php;
            rewrite ^/?wiki(/.*)?$ /index.php?title=$1 last;
            rewrite ^/*$ /index.php last;
        }
  • Also in nginx.conf, replace root /usr/local/www/nginx; with root /usr/local/www/mediawiki;.
  • service nginx reload

Now test it by browsing the wiki.

Suggested nginx config

This is what I use in my nginx.conf:

    server {
        listen       443 ssl;
        server_name  offset.skew.org;
        root         /usr/local/www/mediawiki;
        add_header   Strict-Transport-Security "max-age=31536000; includeSubdomains;" always;

        # deny access to certain SEO bots looking for places to upload backlinks;
        # see http://blocklistpro.com/content-scrapers/
        if ($http_user_agent ~* (AhrefsBot|SiteBot|XoviBot)) { return 403; }

        # allow access to custom 503 page configured in "location /" block
        location = /.wiki_offline.html { try_files $uri =503; internal; }

        # allow access to non-skin images; return 404 if not found
        location ^~ /resources/assets/ { try_files $uri =404; }
        location ^~ /images/ { try_files $uri =404; }

        # deny access to MediaWiki's internals
        location ^~ /cache/ { deny all; }
        location ^~ /docs/ { deny all; }
        location ^~ /extensions/ { deny all; }
        location ^~ /includes/ { deny all; }
        location ^~ /languages/ { deny all; }
        location ^~ /maintenance/ { deny all; }
        location ^~ /mw-config/ { deny all; } # comment out during installation
        location ^~ /resources/ { deny all; }
        location ^~ /serialized/ { deny all; }
        location ^~ /tests/ { deny all; }

        # deny access to core dumps
        location ~ ^.*\.core$ { deny all; }

        location / {
            # if .offline file exists, return custom 503 page
            if (-f '$document_root/.offline') {
               error_page 503 /.wiki_offline.html;
               return 503;
            }
            # if directory requested, pretend its index.php was requested
            index  index.php;

            # short URL support assumes LocalSettings.php has
            # $wgScriptPath       = "";
            # $wgArticlePath = "/wiki/$1";
            # if /wiki/foo requested, pretend it was /index.php?title=foo
            rewrite ^/?wiki(/.*)?$ /index.php?title=$1 last;

            # if anything nonexistent requested, pretend it was /index.php;
            try_files $uri /index.php;
        }

        # pass requests for existing .php scripts to PHP FPM
        location ~ [^/]\.php(/|$) {
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            fastcgi_intercept_errors on;
            if (!-f $document_root$fastcgi_script_name) {
                return 404;
            }
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            include        fastcgi_params;
        }

    }

Make old installation use InnoDB tables

After an upgrade, MediaWiki's maintenance/update.php script was choking with this error: Error: 1785 Statement violates GTID consistency: Updates to non-transactional tables can only be done in either autocommitted statements or single-statement transactions, and never in the same statement as updates to transactional tables. (localhost)

Some Googling gave me the impression that it has something to do with older MediaWiki installations using MyISAM instead of InnoDB. Switching them to InnoDB is safe, although mw_searchindex needs to remain MyISAM for now. You can get a list of commands to alter the tables like this (in the mysql client):

SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' engine=InnoDB;') 
FROM information_schema.tables 
WHERE engine = 'MyISAM';

Edit the list to remove the extraneous formatting, remove mw_searchindex, and remove any other tables you don't want to convert. Then copy-paste it into the mysql client.
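Alternatively, the exclusion can go straight into the query, leaving less to edit by hand — a sketch assuming the mw_ table prefix used here:

```sql
SELECT CONCAT('ALTER TABLE ',table_schema,'.',table_name,' engine=InnoDB;')
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_name <> 'mw_searchindex';
```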

Install rsync

  • portmaster net/rsync

Install procmail

  • portmaster mail/procmail

Install mutt

Mutt is an email client with an interface familiar to Elm users.

  • portmaster mail/mutt

Additional options I enabled: SIDEBAR_PATCH. Options I disabled: HTML, IDN, SASL, XML. These dependencies will be installed: db5, mime-support.

Install tt-rss

Tiny Tiny RSS is an RSS/Atom feed aggregator. You can use its own web-based feed reader or an external client like Tiny Reader for iOS.

  • portmaster www/tt-rss

Options I disabled: GD (no need for generating QR codes). These dependencies will be installed: php56-mysqli, php56-pcntl, php56-curl, php56-xmlrpc, php56-posix.

If it were a new installation, I'd have to create the database, load the schema with cat /usr/local/www/tt-rss/schema/ttrss_schema_mysql.sql | mysql -uroot -pfoo to set up the tables, and then edit /usr/local/www/tt-rss/config.php. But since I already have the database, this is essentially an upgrade, so I need to treat it as such:

  • echo 'ttrssd_enable="YES"' >> /etc/rc.conf.
  • Since the update daemon only supports shutdown signals, set up log rotation like this:
    • Add an entry for /var/log/ttrssd.log in /etc/newsyslog.conf (or better yet, in a new file in the directory /usr/local/etc/newsyslog.conf.d) to rotate at, say, "@T01" (01:00), e.g. /var/log/ttrssd.log 644 3 * @T01 JN
    • Add root cron jobs to run service ttrssd stop > /dev/null 2>&1 1 minute before rotation, and service ttrssd start > /dev/null 2>&1 at 1 or 2 minutes after.
  • Install the clean-greader theme (but it is broken as of late 2018/early 2019):
  • cd /usr/local/www/tt-rss
  • Edit config.php as needed to replicate my old config, but be sure to set SINGLE_USER_MODE in it.
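The two rotation cron jobs from the list above would look something like this in root's crontab (the exact minutes are assumptions; adjust to your rotation hour):

```
# stop ttrssd just before the 01:00 log rotation, restart it after
59 0 * * *	service ttrssd stop > /dev/null 2>&1
2 1 * * *	service ttrssd start > /dev/null 2>&1
```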

You will need to edit the version string in clean-greader.css (the renamed wrapper.css) as explained in the FAQ. You can see the version string of tt-rss via this command:

  • head -3 /usr/local/www/tt-rss/include/version.php

So for example I had to change it from "17.4" to "17.4 (d2957a2718)" before it would work!

Make sure nginx is set up as needed. Most online instructions I found are for when you use a dedicated hostname for your server, whereas I run it from an aliased URL. A working config is below. It assumes the root directory is not set at the server block level, and it will serve up my custom 503 page (explained elsewhere) when the database is offline.

        location ^~ /tt-rss/cache/ { deny all; }
        location ^~ /tt-rss/classes/ { deny all; }
        location ^~ /tt-rss/locale/ { deny all; }
        location ^~ /tt-rss/lock/ { deny all; }
        location ^~ /tt-rss/schema/ { deny all; }
        location ^~ /tt-rss/templates/ { deny all; }
        location ^~ /tt-rss/utils/ { deny all; }
        location = /tt-rss/.reader_offline.html { root /usr/local/www; try_files $uri =503; internal; }
        location ~ ^/tt-rss/.*\.php$ {
            root /usr/local/www;
            fastcgi_intercept_errors on;
            if (-f '$document_root/tt-rss/.offline') {
               error_page 503 /tt-rss/.reader_offline.html;
               return 503;
            }
            fastcgi_param  SCRIPT_FILENAME  $request_filename;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            include        fastcgi_params;
        }
        location /tt-rss/ {
            root /usr/local/www;
            if (-f '$document_root/tt-rss/.offline') {
               error_page 503 /tt-rss/.reader_offline.html;
               return 503;
            }
            index index.php;
        }
  • service nginx reload – you will need to do this even when upgrading sometimes (like if a new PHP module was installed).
  • Visit the site. If it goes straight to the feed reader, no upgrades were needed. If you have trouble and keep getting "primary script unknown" errors, consult Martin Fjordvald's excellent blog post covering all the possibilities.
  • Edit config.php again and unset SINGLE_USER_MODE.
  • Visit the site and log in. All should be well.

Install SpamAssassin

Install sa-utils

I prefer to just install the mail/sa-utils port; it will install SpamAssassin as a dependency.

The sa-utils port adds a script: /usr/local/etc/periodic/daily/sa-utils. This script will run sa-update and restart spamd every day so you don't have to do it from a cron job. You get the output, if any, in your "daily" report by email.

  • portmaster mail/sa-utils
    • When prompted for sa-utils, enable SACOMPILE. This will result in re2c being installed.
    • When prompted for spamassassin, I enabled: SSL, DCC, DKIM, RELAY_COUNTRY, and GNUPG.
    • I have not seen enough benefit from Pyzor, Razor, and SPF to warrant the overhead, especially on a BBB.
    • Yes, I disabled the "recommended" AS_ROOT option. This is a very confusingly named option. You don't want it enabled if your users have ~/.spamassassin directories containing their own user_prefs and Bayes databases.

The spamassassin post-install message mentions the possibility of running spamd children as a non-root user via the -u spamd flag. If AS_ROOT is enabled in the port, this flag will be set by default (along with -c -H /var/spool/spamd) in /usr/local/etc/rc.d/sa-spamd.

The main spamd process always runs as root. If I understand correctly, the normal behavior without -u (i.e. with AS_ROOT disabled) is that spamd children run as root, but then drop privileges temporarily to run as the user who invoked spamc. The -u option simply changes this behavior so that the children always run as the given user and do not drop privileges.

When you have allow_user_rules 1 in /usr/local/etc/mail/spamassassin/local.cf, spamd always tries to create and modify config files in user home directories. Running with -u interferes with this. It mostly works, but you get error messages related to not being able to write to ~/.spamassassin directories. The messages may be "permission denied" or "tie failed"; I've seen both.

It is impractical to deal with this by granting world write permission on users' ~/.spamassassin directories. If you want to run with -u, the proper thing to do is to either use only global (not per-user) settings and databases, or use SQL databases, or use virtual user configurations via the option --virtual-config-dir=/something/%u. (With the latter option, I think you could then give users permission to edit their configs, though I have not tested this.)

For now to keep it simple, I am just running the spamd children as root, i.e. not using -u at all.

Discussion: https://lists.freebsd.org/pipermail/freebsd-ports/2016-December/106349.html

  • When prompted for the various Perl modules (dependencies of Spamassassin), I used all the default options. Do not disable IDN support! SpamAssassin apparently requires it. (Otherwise it will complain at startup about not finding several IDN-related modules.)
  • When prompted for dcc, I disabled the DCC milter option (a milter is a filter for Sendmail) and accepted the license.

See the sa-update setup info below for further details.

GeoIP setup

  • Enabling RELAY_COUNTRY results in net/p5-Geo-IP being installed (and the RelayCountry plugin enabled in init.pre), so it's a good idea to add this to root's crontab via crontab -e:
# on the 8th day of every month, update the GeoIP databases
50 0 8 * *	[ -x /usr/local/bin/geoipupdate.sh ] && /usr/local/bin/geoipupdate.sh > /dev/null 2>&1
  • Run /usr/local/bin/geoipupdate.sh once if you didn't do it after the GeoIP install.
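
The [ -x ... ] && ... idiom in the crontab entry means "run the script only if it is still present and executable", so cron stays silent if GeoIP is ever uninstalled. A quick sketch of the behavior, using a throwaway script in place of geoipupdate.sh:

```shell
# stand-in for /usr/local/bin/geoipupdate.sh
script=$(mktemp)
printf '#!/bin/sh\necho updated\n' > "$script"

chmod +x "$script"
[ -x "$script" ] && "$script"   # test succeeds, script runs: prints "updated"

chmod -x "$script"
[ -x "$script" ] && "$script"   # test fails, script is skipped silently

rm -f "$script"
```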

sa-update setup

  • Assuming you enabled SACOMPILE, make sure this line in /usr/local/etc/mail/spamassassin/v320.pre is not commented out:
    loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
  • Put the flags sa-update needs in /etc/periodic.conf. Pick one:
    • Core rulesets: daily_sa_update_flags="-v --gpgkey 24F434CE --channel updates.spamassassin.org"
    • Core + "Sought" rulesets: daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org"
    • To use the "Sought" ruleset, you need to run fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY
  • Test sa-utils: /usr/local/etc/periodic/daily/sa-utils
  • If it successfully fetches and compiles the rules and restarts spamd, then you can safely add daily_sa_quiet="yes" to /etc/periodic.conf so the verbose output isn't in your nightly emails.
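
Putting it together, after a successful test the relevant lines in /etc/periodic.conf (core ruleset only) are:

daily_sa_update_flags="-v --gpgkey 24F434CE --channel updates.spamassassin.org"
daily_sa_quiet="yes"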

Allow DCC traffic

DCC helps SpamAssassin to give bulk mail a higher score. This means legitimate mailing list posts will also be scored higher, so using it means you have to be vigilant about whitelisting mailing list traffic or avoiding scanning it altogether.

To enable DCC checking, assuming you enabled the DCC option when building the SpamAssassin port:

  • Make sure the appropriate line is uncommented in /usr/local/etc/mail/spamassassin/v310.pre.
  • Make sure UDP traffic is allowed in & out on port 6277. Assuming you set up the "workstation" IPFW firewall, this means:
    • Add 6277/udp to the firewall_myservices line in /etc/rc.conf.
    • Just to get it working for now, run ipfw add 3050 allow udp from any to me dst-port 6277

See the DCC FAQ for more info on the firewall requirements.
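
If the dcc port installed the cdcc control utility, you can check whether the DCC servers are actually reachable through the firewall; it queries each configured server and reports round-trip times (servers the firewall is blocking will show no response):

cdcc info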

Start and test it

Make sure it's ready to run:

  • echo 'spamd_enable="YES"' >> /etc/rc.conf
  • echo 'spamd_flags="-c -H /var/spool/spamd"' >> /etc/rc.conf

Now you can start up spamd:

  • service sa-spamd start

Assuming you installed procmail, make sure your ~/.forward contains something like this:

"|exec /usr/local/bin/procmail || exit 75"

And make sure your ~/.procmailrc contains something like this:

:0fw: spamassassin.lock
* < 600000
|/usr/local/bin/spamc

Keep in mind when editing your .procmailrc that you want to avoid running spamassassin on administrative messages or mailing list traffic.
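
For example, a recipe like this, placed before the spamc recipe, files anything with a List-Id header into a folder without scanning it (the lists/ folder name is just an illustration):

:0:
* ^List-Id:
lists/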

Now send yourself a test message from another host. The message should arrive in your inbox with X-Spam-* headers added. Check /var/log/maillog for errors.
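
A convenient way to exercise the scanning path itself is GTUBE, the standard SpamAssassin test string that is always scored as spam:

echo "XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X" | spamc | grep X-Spam-Flag

A working setup should report X-Spam-Flag: YES.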

Enable short-circuit rules

  • In /usr/local/etc/mail/spamassassin/v320.pre, uncomment loadplugin Mail::SpamAssassin::Plugin::Shortcircuit.
  • In /usr/local/etc/mail/spamassassin/local.cf, uncomment all the lines that begin with shortcircuit.
  • Create /usr/local/etc/mail/spamassassin/shortcircuit.cf, using the content at https://wiki.apache.org/spamassassin/ShortcircuitingRuleset.
  • service sa-spamd reload

Suggested spamc configuration

Create /usr/local/etc/mail/spamassassin/spamc.conf with the following content:

# max message size for scanning = 600k
-s 600000

# prefer IPv4
-4

This is for local users running spamc to send mail to spamd for scanning, like in the .procmailrc example above.

Enable DNSBL checks when using a major ISP

Some or all of the DNSBL checks will likely fail if you rely on a major ISP's DNS servers. You have to run your own caching nameserver which is configured to not "forward" queries for the DNSBL zones.

See https://offset.skew.org/wiki/User:Mjb/Unbound_on_FreeBSD_10#Setup_for_DNSBL_lookups where I documented my setup (it's Option 2).

Suggestions for local.cf

Just a few other things I added to local.cf, affecting all scanned mail:

Speed improvements

lock_method flock
bayes_learn_to_journal 1

The flock method of file locking is ideal if the Bayes databases (e.g. in user ~/.spamassassin/bayes directories) won't ever be accessed over NFS; it will be faster than the default nfssafe method.

bayes_learn_to_journal 1 delays writing Bayes data so there will be less risk of the databases being locked by simultaneous spamd processes.

Add verbose headers

Add more X-Spam-* headers to explain more fully what tests were run and how the score was affected.

add_header      all Report _REPORT_

Allow users to define their own rules

allow_user_rules 1

This allows the processing of custom rules that users put in ~/.spamassassin/user_prefs. Obviously not something you want to do if you don't trust your users to write rules that don't bog down the system or cause mail to be lost.

Adjusting scores for mailing lists

header  FROM_MAILING_LIST       exists:List-Id
score   FROM_MAILING_LIST       -0.1

header  EXAMPLE_LIST        List-Id =~ /<[^.]+\.[^.]+\.example\.org>/
score   EXAMPLE_LIST        -5.0

Users can then further adjust these scores in their ~/.spamassassin/user_prefs:

score FROM_MAILING_LIST -1.0
score EXAMPLE_LIST -100.0

Whitelist hosts

# maybe not ideal, but at one point I missed some legit eBay mail
whitelist_from_rcvd *.ebay.com ebay.com

Favor mail originating locally

# probably not spam if it originates here (default score 0)
score NO_RELAYS 0 -5 0 -5

# hosts appearing in Received: headers of legitimate bounces
# (bounces for mail that originated here)
# as per https://wiki.apache.org/spamassassin/VBounceRuleset
whitelist_bounce_relays foo.example.org

Install Icecast

Clients (listeners) will connect to my Icecast server in order to listen to the SHOUTcast v1 stream (AAC or MP3) which I'll be generating elsewhere and transmitting to the server.

  • portmaster audio/icecast2
  • echo 'icecast_enable="YES"' >> /etc/rc.conf
  • cp /usr/local/etc/icecast.xml.sample /usr/local/etc/icecast.xml
  • edit /usr/local/etc/icecast.xml. Change location, admin, passwords, hostname, listen-socket port. Uncomment shoutcast-mount, ssl-certificate and changeowner/user/group. If you use "/" as a mount point (e.g. in shoutcast-mount), comment out or change the alias for "/". Uncomment another listen-socket port and ssl "1" for admin purposes.
  • mkdir /var/log/icecast
  • chmod a+rwx /var/log/icecast (The log directory must be writeable by the icecast process.)

Generate a private key and self-signed certificate for TLS encryption:

  • cd /usr/local/share/icecast
  • openssl genrsa -out icecast-private.key 2048
  • openssl req -sha256 -new -key icecast-private.key -out icecast-cert.csr -subj '/CN=foo.example.org' -nodes (replace foo.example.org with the actual FQDN)
  • openssl x509 -req -days 720 -in icecast-cert.csr -signkey icecast-private.key -out icecast-cert.crt
  • cat icecast-private.key icecast-cert.crt > icecast.pem

The resulting icecast.pem file must be readable by the icecast process. This key pair is sufficient to establish encryption, but web browsers will complain or prevent access because the certificate (the public key) is self-signed. So another option, if you already have a cert signed by a widely trusted CA, is to make icecast.pem be the concatenation of 1. the private key used to generate that cert, and 2. the full chain of certs, ending with the cert itself. Of course, if you do that, make sure it is only readable by root and the icecast process.
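
For example, with a Let's Encrypt-style layout (the file names here are illustrative; substitute whatever your CA tooling produces), the concatenation would look like:

cat privkey.pem fullchain.pem > /usr/local/share/icecast/icecast.pem
chown root:icecast /usr/local/share/icecast/icecast.pem
chmod 640 /usr/local/share/icecast/icecast.pem

This assumes icecast runs under an icecast group; adjust the owner, group, and mode to match whatever you configured via changeowner in icecast.xml.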

Allow traffic through the firewall:

  • Assuming you set up a 'workstation'-type ipfw firewall, add appropriate TCP ports to firewall_myservices in /etc/rc.conf.
  • service ipfw restart
  • sh /etc/ipfw.rules (reload custom ipfw rules because the restart flushed them).

Start the server:

  • service icecast2 start
  • tail -f /var/log/icecast/error.log – watch for any problems

Try connecting a source and a listener. Try visiting the server URL with the path /status.xsl.

Distributed computing projects

This is a tale of failure. At the time I tried, none of the projects supported by BOINC had native support for ARM processors. This includes my longtime favorite, distributed.net. (distributed.net has since added some ARM support, but only for Raspberry Pi.) So it's not an option to run these on the BeagleBone Black right now.

Nevertheless, here are the notes I started taking when I tried to get something working:

I like to run the distributed.net client on all my machines, but it is not open-source, and there are no FreeBSD armv6 builds yet.

Ordinarily you can run the client through BOINC with the Moo! Wrapper, but this doesn't work either. Here's the general idea with BOINC, though:

Install BOINC and start the client:

  • portmaster net/boinc – this will install several dependencies, including Perl. In the 'make config' screens for those, I generally disable docs & examples, X11, NLS (for now), and IPv6 (for now). When installing Perl, I chose to disable 64bit_int because it says "on i386".
  • echo boinc_client_enable="YES" >> /etc/rc.conf
  • service boinc-client start — there's a bug in the port; it writes the wrong pid to the pidfile, so subsequent 'service' commands will fail
  • Create account on the BOINC project page you're interested in
  • Go to your account info on that page and click on Account Keys
  • Create ~boinc/account_whatever.xml as instructed. Put the account key (not weak key) in a file, e.g. ~boinc/whatever.key.
  • boinccmd --project_attach http://moowrap.net/ `cat ~boinc/whatever.key`
  • tail -f ~boinc/stdoutdae.txt — this is the log

Blast! Look what comes up in the log: This project doesn't support computers of type armv6-pc-freebsd

None of the projects I tried (Moo!, SETI@Home, Enigma@Home) are supported. So I went ahead and commented out the boinc_client_enable line in /etc/rc.conf and manually killed the boinc-client process.

I later filed a freebsd-armv6 client port request at distributed.net.