FreeBSD on BeagleBone Black: Additional software

This is a continuation of my FreeBSD on BeagleBone Black notes. Any questions/comments, email me directly at root (at) skew.org.

Some of my echo commands require support for \n, e.g. by setting setenv ECHO_STYLE both in tcsh.
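
For example, some of the one-liners below (like the one that creates my.cnf) rely on this; in tcsh:

setenv ECHO_STYLE both
echo 'one\ntwo'

With \n support enabled, the second command prints two lines rather than a literal \n.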

Conveniences

Install nano

I prefer to use a 'visual' text editor with familiar command keys, multi-line cut & paste, and regex search & replace. I never got the hang of the classic editor vi; I find emacs too complicated, and ee is too limited. I used pico for many years, and now use nano, which is essentially a pico clone with more features.

Pretty much anytime I run portmaster to install or upgrade something, I actually run portmaster -D so it doesn't prompt me at the end about keeping the distfiles.
  • portmaster editors/nano

See my nano configuration files document for configuration info.

Install Perl libwww

I like to use the HEAD and GET commands from time to time to diagnose HTTP problems. These are part of Perl's libwww module, which gets installed as a dependency of other ports like SpamAssassin. The commands are nice to have anyway, so I like to install them right away:

  • portmaster www/p5-libwww

This will install a bunch of other Perl modules as dependencies.
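
For example, to check response headers or fetch a page body from the command line (example.org is just a placeholder):

HEAD http://www.example.org/
GET http://www.example.org/robots.txt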

Build 'locate' database

Why wait for this to run on Sunday night? Do it now so the locate command will work:

  • /etc/periodic/weekly/310.locate
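
Once that finishes, queries like this one will work immediately:

locate openssl.cnf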

OpenSSL config

The base system comes with OpenSSL libraries installed, as well as the command-line tool /usr/bin/openssl. Its configuration file is expected to be in /etc/ssl/openssl.cnf.

At some point it's likely you'll end up with the security/openssl port installed as well (e.g. by installing OpenSMTPD or by enabling HTTPS support when installing nginx). This results in a /usr/local/bin/openssl being installed, and its config file is /usr/local/openssl/openssl.cnf, which is not created by default. You can create it yourself:

  • cd /usr/local/openssl && cp openssl.cnf.sample openssl.cnf

Of course, you could create a symlink if you want both to share the same config file.
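
For example, to make the port's openssl use the base system's config (which file is the link and which is the target is up to you):

ln -s /etc/ssl/openssl.cnf /usr/local/openssl/openssl.cnf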

Replacement services

Install OpenNTPD

Instead of the stock ntpd, I prefer OpenNTPD because it's slightly easier to configure and will be safer to update. (I also was perhaps a bit overly paranoid about the stock ntpd's requirement of always listening on UDP port 123.)

  • portmaster net/openntpd
  • In /etc/rc.conf:
ntpd_enable="NO"
openntpd_enable="YES"
openntpd_flags="-s"

If you like, you can use /usr/local/etc/ntpd.conf as-is; it just says to use a random selection from pool.ntp.org, and to not listen on port 123 (it'll use random, temporary high-numbered ports instead).

Logging is the same as for the stock ntpd.

  • service ntpd stop (obviously not necessary if you weren't running the stock ntpd before)
  • service openntpd start

You can tail the log to see what it's doing. You should see messages about valid and invalid peers, something like this:

ntp engine ready
set local clock to Mon Feb 17 11:44:06 MST 2014 (offset 0.002539s)
peer x.x.x.x now valid
adjusting local clock by -0.046633s

Because of the issues with Unbound needing accurate time before it can resolve anything, I am going to experiment with putting time.nist.gov's IP address in /etc/hosts as the local alias 'timenistgov':

128.138.141.172 timenistgov

...and then have that be the first server checked in /usr/local/etc/ntpd.conf:

server timenistgov
servers pool.ntp.org

The hope is that the IP address will suffice when DNS is failing!

Later, I can set up a script to try to keep the timenistgov entry in /etc/hosts up-to-date. Of course, this will not help if they ever change the IP address while the BBB is offline.
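
Here's a rough sketch of what such a script might look like. It assumes drill (in the FreeBSD base system) is available, that the lookup succeeds when the script runs, and that the /etc/hosts line looks exactly like the one above:

#!/bin/sh
# Refresh the timenistgov alias in /etc/hosts with the current A record
# for time.nist.gov. Leaves the file alone if the lookup fails.
ip=`drill time.nist.gov A | awk '$1 == "time.nist.gov." && $4 == "A" { print $5; exit }'`
if [ -n "$ip" ]; then
    sed -i '' "s/^[0-9.]* timenistgov\$/$ip timenistgov/" /etc/hosts
fi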

Install OpenSMTPD

The snapshots for the BBB come with the Sendmail daemon disabled in /etc/rc.conf, so emails (from root to root) immediately start plugging up the queue, as you can see in /var/log/maillog.

This is what's in /etc/rc.conf:

sendmail_enable="NONE"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"

Something interesting: from the messages in /var/log/maillog about missing /etc/mail/certs, it looks like the client supports STARTTLS without having to be custom-built with SASL2 like I had to do in FreeBSD 8. Not sure what's up with that.

Rather than enabling Sendmail, I am going to try OpenSMTPD now.

  • portmaster mail/opensmtpd – also installs various dependencies, including OpenSSL
  • echo smtpd_enable="YES" >> /etc/rc.conf
  • cp /usr/local/etc/mail/smtpd.conf.sample /usr/local/etc/mail/smtpd.conf

Edit smtpd.conf to your liking. You probably want the following, at the very least (and replace example.org with your domain, or comment out that line if you're not accepting mail from outside the BBB):

# This is the smtpd server system-wide configuration file.
# See smtpd.conf(5) for more information.

# To accept external mail, replace with: listen on all
listen on 127.0.0.1
listen on ::1

# If you edit the file, you have to run "smtpctl update table aliases"
table aliases file:/usr/local/etc/mail/aliases

# If 'from local' is omitted, it is assumed
accept from any for domain "example.org" alias <aliases> deliver to mbox
accept for local alias <aliases> deliver to mbox
accept for any relay

Then start the service:

  • service smtpd start

OpenSMTPD problems

The service first runs smtpd -n to do a sanity check on the smtpd.conf file. I noticed two problems with this:

1. If hostname returns a non-FQDN which is not resolvable (e.g. the default, "beaglebone", and this name is not mentioned in /etc/hosts), then smtpd may fail at first with a strange error message:

Performing sanity check on smtpd configuration:
invalid hostname: getaddrinfo() failed: hostname nor servname provided, or not known
/usr/local/etc/rc.d/smtpd: WARNING: failed precmd routine for smtpd

To work around this, either make sure the result of running hostname is a FQDN like "beaglebone.example.org" (which requires modifying the hostname line in /etc/rc.conf), or just add the unqualified hostname as another alias for localhost in /etc/hosts, which is a good idea to do anyway.
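
For example, the loopback lines in /etc/hosts could look like this, assuming the default hostname of beaglebone:

::1          localhost localhost.my.domain beaglebone
127.0.0.1    localhost localhost.my.domain beaglebone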

2. smtpd consumes all available memory for over 20 minutes if I leave the table aliases line in the config. It eventually works, but not until after other essential services have failed due to the memory churn. This issue also affects runs of makemap and the nightly run of smtpctl by /etc/periodic/daily/500.queuerun. For the latter, I get numerous "swap_pager_getswapspace(16): failed" messages interspersed with "pid 41060 (smtpctl), uid 0, was killed: out of swap space".

In late October 2015, I reported both issues:

  • re: issue 1, added comments to a bug report of similar behavior on a Raspberry Pi 2
  • re: issue 2, submitted a new bug report

Install MySQL

  • portmaster databases/mysql56-server

This will install mysql56-client, cmake, perl, and libedit. cmake has many dependencies, including Python (py-sphinx), curl, expat, jsoncpp, and libarchive. Depending on whether you've got Perl and Python already (and up-to-date), this will take roughly 3 to 6 hours.

MySQL is a bit of a RAM hog. On a lightly loaded system, it should do OK, though. Just make sure you have swap space!

Secure and start it

Ensure the server won't be accessible to the outside world, enable it, and start it up:

  • echo '[mysqld]\nbind-address=127.0.0.1\ntmpdir=/var/tmp' > /var/db/mysql/my.cnf
  • echo 'mysql_enable="YES"' >> /etc/rc.conf
  • service mysql-server start – this may take a minute, as it will have to use a little bit of swap.

If you are not restoring data from a backup (see next subsection), do the following to delete the test databases and set the passwords (yes, plural!) for the root account:

  • Refer to Securing the Initial MySQL Accounts.
  • mysql -uroot
    • DELETE FROM mysql.db WHERE Db='test';
    • DELETE FROM mysql.db WHERE Db='test\_%';
    • SET PASSWORD FOR 'root'@'localhost' = PASSWORD('foo'); – change foo to the actual password you want
    • SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('foo'); – use the same password
    • SET PASSWORD FOR 'root'@'::1' = PASSWORD('foo'); – use the same password
    • SELECT User, Host, Password FROM mysql.user WHERE user='root'; – see what other hosts have an empty root password, and either set a password or delete those rows. For example: DELETE FROM mysql.user WHERE Host='localhost.localdomain';
    • \q
  • mysqladmin -uroot -pfoo status – This is to make sure the password works and mysqld is alive.

(If you were to just do mysqladmin password foo, it would only set the password for 'root'@'localhost'.)

Restore data from backup

On my other server, I ran a daily script to back up my MySQL databases. To mirror the data here, I copy the resulting .sql file (bzip2'd) to this machine and pipe it right into the client to populate the databases:

  • bzcat mysql-backup-20151022.sql.bz2 | mysql -uroot -pfoo – foo is the root password, of course

The backed up data includes the mysql.db and mysql.user tables, thus includes all databases and account & password data from the other server. Obviously there is some risk if the other server has insecure accounts and test databases.
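
For reference, producing such a file on the other server amounts to something like this (a sketch, not the actual script):

mysqldump -uroot -pfoo --all-databases | bzip2 > mysql-backup-`date +%Y%m%d`.sql.bz2

(--all-databases includes the mysql schema, which is why the account and password data come along for the ride.)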

After loading from backup, I recommend also performing any housekeeping needed to ensure the tables are compatible with this server:

  • mysql_upgrade -pfoo --force
  • service mysql-server restart

Install nginx

I'm a longtime Apache httpd administrator (even co-ran apache.org for a while) but am going to see if nginx will work just as well for what I need:

  • HTTPS with SNI (virtual host) and HSTS header support
  • URL rewriting and aliasing
  • PHP (to support MediaWiki)
  • basic authentication
  • server-parsed HTML (for timestamp comments, syntax coloring)
  • fancy directory indexes (custom comments would be nice, but I can live without them)

Let's get started:

  • portmaster -D www/nginx – installs PCRE as well
    • Modules I left enabled: IPV6, HTTP, HTTP_CACHE, HTTP_REWRITE, HTTP_SSL, HTTP_STATUS, WWW
    • Modules I also enabled: HTTP_FANCYINDEX
  • echo 'nginx_enable="YES"' >> /etc/rc.conf

Try it out:

  • service nginx start
  • Visit your IP address in a browser (just via regular HTTP). You should get a "Welcome to nginx!" page.

Immediately I'm struck by how lightweight it is: processes under 14 MB instead of Apache's ~90 MB.

Enable HTTPS service

Prep for HTTPS support (if you haven't already done this):

  • Put your private key (.key) and cert (.crt or .pem) somewhere.
  • Create a 2048-bit Diffie-Hellman group: openssl dhparam -out /etc/ssl/dhparams.pem 2048

Enable HTTPS support by putting this in /usr/local/etc/nginx/nginx.conf for each HTTPS server:

    server {
        listen       443 ssl;
        server_name  localhost;
        root   /usr/local/www/nginx;

        ssl_certificate      /path/to/your/cert;
        ssl_certificate_key  /path/to/your/server_key;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_dhparam /etc/ssl/dhparams.pem;

        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;" always;
        gzip off;

        location / {
            index  index.html index.htm;
        }
    }

This config includes HSTS support; "perfect" forward secrecy (PFS); and mitigation of the POODLE, CRIME, BEAST, and BREACH attacks. (CRIME attack mitigation is assumed because OpenSSL is built without zlib compression capability by default now.)

Unlike Apache, nginx does not have a separate directive for your certificate chain. The cert file used by nginx should contain not just your site's cert, but also any certs that clients (browsers) can't be expected to already trust, e.g. any intermediate certs, appended in order after your cert. Otherwise, some clients will complain or will consider your cert to be self-signed.
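
For example, with hypothetical file names, the chained cert can be built like this, and ssl_certificate then points at the combined file:

cat www.example.org.crt intermediate-ca.crt > www.example.org.chained.crt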

If you like, you can redirect HTTP to HTTPS:

    server {
        listen 80;
        server_name localhost;
        root /usr/local/www/nginx;
        return 301 https://$host$request_uri;
    }
  • service nginx reload
  • Check the site again, but this time via HTTPS. Once you verify it's working, you can tweak the config as you like.

If your server is publicly accessible, test it via the SSL Server Test by Qualys SSL Labs.

Handle temporarily offline sites

If a website needs to be taken down temporarily, e.g. for website backups, you can configure nginx to respond with an HTTP 503 ("service temporarily unavailable") any time your backup script creates a file named ".offline" in the document root:

        location / {
            if (-f '$document_root/.offline') {
               return 503;
            }
            ...
        }

The backup script needs to remove the file when it's done, of course.
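
As a sketch (the paths and the actual backup commands are placeholders), the relevant part of the backup script is just:

#!/bin/sh
# Take the site offline, do the backup, then bring the site back.
docroot=/usr/local/www/mediawiki
touch $docroot/.offline
# (run the actual backup commands here)
rm $docroot/.offline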

Alternatively, you can customize the 503 response page. Just make sure a sub-request for that custom page won't be caught by the "if":

        location /.website_offline.html { }

        location / {
            if (-f '$document_root/.offline') {
               error_page 503 /.website_offline.html;
               return 503;
            }
            ...
        }

Here is my custom 503 page for when my wiki is offline:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>wiki offline temporarily</title>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
  </head>
  <body>
    <div style="float: left">
      <h1>Don't panic.</h1>
      <p>The wiki is temporarily offline.</p>
      <p>It might be for a daily backup, in which case it should be online within 15 minutes.</p>
    </div>
    <div>
      <!-- get your own: http://theoatmeal.com/comics/state_web_summer#tumblr -->
      <img style="float: right; width: 400px" src="//skew.org/oatmeal_tumbeasts/tbrun1.png" alt="[Tumbeast illustration by Matthew Inman (theoatmeal.com); license: CC-BY-3.0]" />
    </div>
  </body>
</html>

Install PHP

  • portmaster lang/php56

For use via nginx, make sure the FPM option is checked (it is by default). FPM is the FastCGI Process Manager. It runs a daemon on localhost port 9000 which, speaking the FastCGI binary protocol, runs PHP scripts on behalf of the web server as if they were CGI scripts.

Configure nginx to use PHP FPM

  • echo 'php_fpm_enable="YES"' >> /etc/rc.conf
  • service php-fpm start
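
Before wiring it into nginx, you can confirm php-fpm is listening on port 9000 with sockstat from the base system:

sockstat -4l | grep 9000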

Add to /usr/local/etc/nginx/nginx.conf:

        location ~ [^/]\.php(/|$) {
            root /usr/local/www/nginx;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            fastcgi_intercept_errors on;
            if (!-f $document_root$fastcgi_script_name) {
                return 404;
            }
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            include        fastcgi_params;
        }
  • service nginx reload
  • echo '<?php var_export($_SERVER)?>' > /usr/local/www/nginx/test.php
  • echo '<?php echo phpinfo(); ?>' > /usr/local/www/nginx/phpinfo.php
  • In your browser, visit /test.php/foo/bar.php?v=1 and /phpinfo.php ... when confirmed working, move the test files to somewhere not publicly accessible.

Periodically delete expired PHP session data

If you run PHP-based websites for a while, you'll probably notice that session data tends to get left behind. This is because PHP defaults to storing session data in /tmp or /var/tmp, and has only a 1 in 1000 chance of running its garbage collector upon the creation of a new session. The garbage collector expires sessions older than php.ini's session.gc_maxlifetime (24 minutes by default). You can increase the probability of it running, but it still only runs when a new session is created, so it's really only useful for sites which get a new session at least every 24 minutes or so. Otherwise, you're better off (IMHO) just running a script to clean out the stale session files. So I use the script below, invoked hourly from root's crontab:

#!/bin/sh
# Delete PHP session files older than session.gc_maxlifetime, which php -i
# reports in seconds; find's -cmin wants minutes, hence the division by 60.
maxmin=$(echo `/usr/local/bin/php -i | grep session.gc_maxlifetime | cut -d " " -f 3` / 60 | bc)
echo "Deleting the following stale sess_* files:"
find /tmp /var/tmp -type f -name sess_\* -cmin +$maxmin
find /tmp /var/tmp -type f -name sess_\* -cmin +$maxmin -delete

Of course you can store session data in a database if you want, and the stale file problem is avoided altogether. But then that's just one more thing that can break.

Here's what I put in root's crontab, via crontab -e:

# every hour, clear out the PHP session cache
10 * * * *  /usr/local/adm/clean_up_php_sessions > /dev/null 2>&1

Install MediaWiki

  • unalias ls; unsetenv CLICOLOR_FORCE – see below.
  • portmaster www/mediawiki125 – or whatever the latest version is.

In the 'make config' step, only enable MySQL and xCache. Disable sockets; that feature is only used by memcached. Don't use pecl-APC because last I checked, you can't use it with PHP 5.6.

Other dependencies which will be installed: php56-zlib, php56-iconv, libiconv, php56-mbstring, oniguruma4, php56-mysql, php56-json, php56-readline, php56-hash, php56-ctype, php56-dom, php56-xml, php56-xmlreader, php56-session, and www/xcache.

The build of oniguruma4 will fail if 'ls' is configured to produce color output, hence the unalias & unsetenv commands. See my .cshrc for more info.

  • service php-fpm reload
  • cp /usr/local/share/examples/xcache/xcache.ini /usr/local/etc/php
  • Edit /usr/local/etc/php/xcache.ini and set xcache.admin.user and xcache.admin.pass. Consider adding a password hint as a comment. Also consider dropping xcache.size down to something smaller than the default of 60M, maybe 16M to start.

I already have the database set up (restored from a backup of another installation), so instead of doing the in-place web install, I'll just copy my config, images and extensions from my other installation:

  • scp -pr 'otherhost:/usr/local/www/mediawiki/{AdminSettings.php,LocalSettings.php,images,robots.txt,favicon.ico}' /usr/local/www/mediawiki
  • scp -pr 'otherhost:/usr/local/www/mediawiki/extensions/{CheckUser,Cite,ConfirmEdit,Nuke}' /usr/local/www/mediawiki/extensions

Side note: In an attempt to increase security (though with a performance penalty), I've replaced the block of database variables in LocalSettings.php, as well as the whole of AdminSettings.php, with something like include("/path/to/db-vars.php");, where db-vars.php contains the block in question wrapped in <?php...?>. So I have to make sure to grab those files as well. They need to live outside of any website's document root, yet still be readable by the nginx worker process (e.g. owner or group 'www') and backup scripts, but not by anyone else.
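
A minimal sketch of such a db-vars.php, with made-up values (the variable names are the standard MediaWiki database settings):

<?php
# Keep this file outside the document root; readable by the 'www' group and backups only.
$wgDBtype     = "mysql";
$wgDBserver   = "localhost";
$wgDBname     = "wikidb";
$wgDBuser     = "wikiuser";
$wgDBpassword = "secret";
?>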

  • Adjust nginx.conf appropriately (replacing my previous "location /" block):
        location / {
            index  index.php;
            rewrite ^/?wiki(/.*)?$ /index.php?title=$1 last;
            rewrite ^/*$ /index.php last;
        }

This config supports short URLs like /wiki/articlename.

  • Also in nginx.conf, replace root /usr/local/www/nginx; with root /usr/local/www/mediawiki;.
  • service nginx reload

Now test it by browsing the wiki.

Install rsync

  • portmaster net/rsync

Install procmail

  • portmaster mail/procmail

Install mutt

Mutt is an email client with an interface familiar to Elm users.

  • portmaster mail/mutt

Additional options I enabled: SIDEBAR_PATCH. Options I disabled: HTML, IDN, SASL, XML. These dependencies will be installed: db5, mime-support.

Install tt-rss

Tiny Tiny RSS is an RSS/Atom feed aggregator. You can use its own web-based feed reader or an external client like Tiny Reader for iOS.

  • portmaster www/tt-rss

Options I disabled: GD (no need for generating QR codes). These dependencies will be installed: php56-mysqli, php56-pcntl, php56-curl, php56-xmlrpc, php56-posix.

If you intend to have FEED_CRYPT_KEY defined in the tt-rss config, install php56-mcrypt:

  • unalias ls && unsetenv CLICOLOR_FORCE – This is so libmcrypt 'configure' won't choke on colorized 'ls' output.
  • portmaster security/php56-mcrypt – This will also install libmcrypt and libltdl.

If it were a new installation, I'd have to create the database, load the schema (cat /usr/local/www/tt-rss/schema/ttrss_schema_mysql.sql | mysql -uroot -pfoo), and then edit /usr/local/www/tt-rss/config.php. But since I already have the database, this is essentially an upgrade, so I need to treat it as such:

  • echo 'ttrssd_enable="YES"' >> /etc/rc.conf.
  • Add an entry for /var/log/ttrssd.log in /etc/newsyslog.conf (see the example after this list). In 2014, I had trouble getting log rotation to work; I think ttrssd must be shut down during the rotation. Is this fixed?
  • Install the clean-greader theme:
  • cd /usr/local/www/tt-rss
  • Edit config.php as needed to replicate my old config, but be sure to set SINGLE_USER_MODE in it.
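
Here is the sort of newsyslog.conf entry I mean; the owner, mode, count, and rotation time are guesses to adjust to taste:

# logfilename            [owner:group]  mode  count  size  when   flags
/var/log/ttrssd.log      www:www        640   7      *     @T00   J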

Regardless of whether you're upgrading or installing anew, make sure to set up nginx as needed. Most online instructions I found assume you use a dedicated hostname for your server, whereas I want to run it from an aliased URL. It took me a while to figure out what I needed to add to the appropriate server block. A working config is below. It assumes the root directory is not set at the server block level.

        location ~ ^/tt-rss/.*\.php$ {
            root /usr/local/www;
            fastcgi_param  SCRIPT_FILENAME  $request_filename;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            include        fastcgi_params;
        }

        location /tt-rss/ {
            root /usr/local/www;
            index index.php;
        }
  • service nginx reload
  • Visit the site. If it goes straight to the feed reader, no upgrades were needed. If you have trouble and keep getting "primary script unknown" errors, consult Martin Fjordvald's excellent blog post covering all the possibilities.
  • Edit config.php again and unset SINGLE_USER_MODE.
  • Visit the site and log in. All should be well.

Install SpamAssassin

Install gnupg1 armv6 patch

As of Nov. 2015, a patch is still needed for one of the dependencies to build on the BeagleBone. So, assuming that's still the case, create a new file, /usr/ports/security/gnupg1/files/patch-mpi_longlong.h, with the following content:

--- mpi/longlong.h.orig 2014-06-30 16:46:23 UTC
+++ mpi/longlong.h
@@ -184,8 +184,8 @@ extern UDItype __udiv_qrnnd ();
 #define add_ssaaaa(sh, sl, ah, al, bh, bl) \
   __asm__ ("adds %1, %4, %5\n"                                          \
           "adc  %0, %2, %3"                                            \
-          : "=r" ((USItype)(sh)),                                      \
-            "=&r" ((USItype)(sl))                                      \
+          : "=r" ((sh)),                                               \
+            "=&r" ((sl))                                               \
           : "%r" ((USItype)(ah)),                                      \
             "rI" ((USItype)(bh)),                                      \
             "%r" ((USItype)(al)),                                      \
@@ -193,8 +193,8 @@ extern UDItype __udiv_qrnnd ();
 #define sub_ddmmss(sh, sl, ah, al, bh, bl) \
   __asm__ ("subs %1, %4, %5\n"                                          \
           "sbc  %0, %2, %3"                                            \
-          : "=r" ((USItype)(sh)),                                      \
-            "=&r" ((USItype)(sl))                                      \
+          : "=r" ((sh)),                                               \
+            "=&r" ((sl))                                               \
           : "r" ((USItype)(ah)),                                       \
             "rI" ((USItype)(bh)),                                      \
             "r" ((USItype)(al)),                                       \
@@ -221,10 +221,10 @@ extern UDItype __udiv_qrnnd ();
           : "r0", "r1", "r2")
 #else
 #define umul_ppmm(xh, xl, a, b)                                         \
-  __asm__ ("%@ Inlined umul_ppmm\n"                                     \
-          "umull %r1, %r0, %r2, %r3"                                   \
-                  : "=&r" ((USItype)(xh)),                             \
-                    "=r" ((USItype)(xl))                               \
+  __asm__ (                                                             \
+          "umull %1, %0, %2, %3"                                       \
+                  : "=&r" ((xh)),                                      \
+                    "=r" ((xl))                                        \
                   : "r" ((USItype)(a)),                                \
                     "r" ((USItype)(b))                                 \
                   : "r0", "r1")

It will be automatically applied during the build.

Install sa-utils

I prefer to just install the mail/sa-utils port; it will install SpamAssassin as a dependency.

The sa-utils port adds a script: /usr/local/etc/periodic/daily/sa-utils. This script will run sa-update and restart spamd every day so you don't have to do it from a cron job. You get the output, if any, in your "daily" report by email.

  • portmaster mail/sa-utils
    • When prompted for sa-utils, enable SACOMPILE. This will result in re2c being installed.
    • When prompted for spamassassin, I enabled: DCC, DKIM, RELAY_COUNTRY. I am not sure about the usefulness of PYZOR and RAZOR these days. Are they worth the overhead?
    • When prompted for the various Perl modules, I used all the default options.
    • When prompted for dcc, I disabled the DCC milter option and accepted the license.
  • echo 'spamd_enable="YES"' >> /etc/rc.conf

The spamassassin post-install message mentions the possibility of running spamd as a non-root user, but this user must have read/write access to users' ~/.spamassassin directories. I have not figured out how to best handle that, so I just run it as root.

GeoIP setup

  • Enabling RELAY_COUNTRY results in GeoIP being installed, so it's a good idea to add this to root's crontab via crontab -e:
# on the 8th day of every month, update the GeoIP databases
50 0 8 * *	/usr/local/bin/geoipupdate.sh > /dev/null 2>&1
  • Run /usr/local/bin/geoipupdate.sh once if you didn't do it after the GeoIP install.

sa-update setup

  • Assuming you enabled SACOMPILE, make sure this line in /usr/local/etc/mail/spamassassin/v320.pre is not commented out:
    loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
  • Put the flags sa-update needs in /etc/periodic.conf. Pick one:
    • Core rulesets: daily_sa_update_flags="-v --gpgkey 24F434CE --channel updates.spamassassin.org"
    • Core + "Sought" rulesets: daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org"
    • To use the "Sought" ruleset, you first need to import its GPG key: fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY
  • Test sa-utils: /usr/local/etc/periodic/daily/sa-utils
  • If it successfully fetches and compiles the rules and restarts spamd, then you can safely add daily_sa_quiet="yes" to /etc/periodic.conf so the verbose output isn't in your nightly emails.

Allow DCC traffic

DCC helps SpamAssassin to give bulk mail a higher score. This means legitimate mailing list posts will also be scored higher, so using it means you have to be vigilant about whitelisting or avoiding scanning mailing list traffic.

To enable DCC checking, assuming you enabled the DCC option when building the SpamAssassin port:

  • Make sure the appropriate line is uncommented in /usr/local/etc/mail/spamassassin/v310.pre.
  • Make sure UDP traffic is allowed in & out on port 6277. Assuming you set up the "workstation" IPFW firewall, this means:
    • Add 6277/udp to the firewall_myservices line in /etc/rc.conf (see the example below).
    • Just to get it working for now, run ipfw add 3050 allow udp from any to me dst-port 6277
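
For example, in /etc/rc.conf (the other entries shown are placeholders for whatever services you already allow):

firewall_myservices="22/tcp 80/tcp 443/tcp 6277/udp"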

See the DCC FAQ for more info on the firewall requirements.

Start and test it

Now you can start up spamd:

  • service sa-spamd start

Assuming you installed procmail, make sure your ~/.forward contains something like this:

"|exec /usr/local/bin/procmail || exit 75"

And make sure your ~/.procmailrc contains something like this:

:0fw: spamassassin.lock
* < 600000
|/usr/local/bin/spamc

Keep in mind when editing your .procmailrc that you want to avoid running spamassassin on administrative messages or mailing list traffic.

Now send yourself a test message from another host. The message should arrive in your inbox with X-Spam-* headers added. Check /var/log/maillog for errors.

Enable short-circuit rules

  • In /usr/local/etc/mail/spamassassin/v320.pre, uncomment loadplugin Mail::SpamAssassin::Plugin::Shortcircuit.
  • In /usr/local/etc/mail/spamassassin/local.cf, uncomment all the lines that begin with shortcircuit.
  • Create /usr/local/etc/mail/spamassassin/shortcircuit.cf, using the content at https://wiki.apache.org/spamassassin/ShortcircuitingRuleset.
  • service sa-spamd reload

Suggested spamc configuration

Create /usr/local/etc/mail/spamassassin/spamc.conf with the following content:

# max message size for scanning = 600k
-s 600000

# prefer IPv4
-4

This is for local users running spamc to send mail to spamd for scanning, like in the .procmailrc example above.

Suggestions for local.cf

Just a few other things I added to local.cf, affecting all scanned mail:

Add verbose headers

Add more X-Spam-* headers to explain more fully what tests were run and how the score was affected.

add_header      all Report _REPORT_

Allow users to define their own rules

allow_user_rules 1

This allows the processing of custom rules that users put in ~/.spamassassin/user_prefs. Obviously not something you want to do if you don't trust your users to write rules that don't bog down the system or cause mail to be lost.

Adjusting scores for mailing lists

header  FROM_MAILING_LIST       exists:List-Id
score   FROM_MAILING_LIST       -0.1

header  EXAMPLE_LIST        List-Id =~ /<[^.]+\.[^.]+\.example\.org>/
score   EXAMPLE_LIST        -5.0

Users can then further adjust these scores in their ~/.spamassassin/user_prefs:

score FROM_MAILING_LIST -1.0
score EXAMPLE_LIST -100.0

Whitelist hosts

# maybe not ideal, but at one point I missed some legit eBay mail
whitelist_from_rcvd *.ebay.com ebay.com

Favor mail originating locally

# probably not spam if it originates here (default score 0)
score NO_RELAYS 0 -5 0 -5

# hosts appearing in Received: headers of legitimate bounces
# (bounces for mail that originated here)
# as per https://wiki.apache.org/spamassassin/VBounceRuleset
whitelist_bounce_relays foo.example.org

Distributed computing projects

This is a tale of failure. None of the projects supported by BOINC have native support for armv6 processors. This includes my longtime favorite, distributed.net. So it's not an option to run these on the BeagleBone Black right now.

Nevertheless, here are the notes I started taking when I tried to get something working:

I like to run the distributed.net client on all my machines, but it is not open-source, and there are no builds for ARMv6 on FreeBSD yet.

Ordinarily you can run the client through BOINC with the Moo! Wrapper, but this doesn't work either. Here's the general idea with BOINC, though:

Install BOINC and start the client:

  • portmaster net/boinc – this will install several dependencies, including Perl. In the 'make config' screens for those, I generally disable docs & examples, X11, NLS (for now), and IPv6 (for now). When installing Perl, I chose to disable 64bit_int because the option description says it's for i386.
  • echo boinc_client_enable="YES" >> /etc/rc.conf
  • service boinc-client start — there's a bug in the port; it writes the wrong pid to the pidfile, so subsequent 'service' commands will fail
  • Create account on the BOINC project page you're interested in
  • Go to your account info on that page and click on Account Keys
  • Create ~boinc/account_whatever.xml as instructed. Put the account key (not weak key) in a file, e.g. ~boinc/whatever.key.
  • boinccmd --project_attach http://moowrap.net/ `cat ~boinc/whatever.key`
  • tail -f ~boinc/stdoutdae.txt — this is the log

Blast! Look what comes up in the log: This project doesn't support computers of type armv6-pc-freebsd

None of the projects I tried (Moo!, SETI@Home, Enigma@Home) are supported. So I went ahead and commented out the boinc_client_enable line in /etc/rc.conf and manually killed the boinc-client process.

I later filed a freebsd-armv6 client port request at distributed.net.