User:Mjb/FreeBSD on BeagleBone Black/Additional software
This is a continuation of my FreeBSD on BeagleBone Black notes. Any questions/comments, email me directly at root (at) skew.org.
Some of my echo commands require support for \n, e.g. by running setenv ECHO_STYLE both in tcsh.
Contents
- 1 Ports and packages management
- 2 Conveniences
- 3 OpenSSL config
- 4 Replacement services
- 5 Install MySQL
- 6 Install nginx
- 7 Install PHP
- 8 Mitigate listen queue overflows
- 9 Install MediaWiki
- 10 Install rsync
- 11 Install procmail
- 12 Install mutt
- 13 Install tt-rss
- 14 Install SpamAssassin
- 15 Install Icecast
- 16 Distributed computing projects
Ports and packages management
Useful portmaster flags
Some of the most useful flags for portmaster:

- -d will make it delete old distfiles after each port is installed, rather than asking you about it. (-D would make it keep them.)
- -b will make it keep the backup it made when installing the previous version of a port. It usually deletes the backup after successfully installing a new version.
- -x pattern will make it exclude ports (including dependencies) that match the glob pattern.
- --update-if-newer will prevent rebuilding/reinstalling ports that don't need it. But for some reason, you have to specify more than one port on the command line for this to work.
- -r portname will rebuild portname and all ports that depend on it. This is for when sub-dependencies have been updated. For example, icecast2 requires libxslt, which requires libgcrypt. If you just tell portmaster to update or rebuild icecast2, it won't rebuild an already-up-to-date libxslt just to pick up a new version of libgcrypt. So to get the new libgcrypt into libxslt, you need to run portmaster -r libgcrypt.
Here's an example (to update Perl modules, and Perl if needed):
portmaster -b -d --update-if-newer --packages p5-
If you're going to be using some of these flags all the time, just put them in your /usr/local/etc/portmaster.rc (see the portmaster.rc.sample there for a template). Mine has these lines uncommented:
BACKUP=bopt
DONT_SCRUB_DISTFILES=Dopt
SAVE_SHARED=wopt
PM_LOG=/var/log/portmaster.log
List installed packages
- pkg info – names & descriptions
- pkg info -aoq | sort – categories & names
Show dependencies
- pkg info -r foo – lists packages with runtime dependencies on the foo package.
- pkg info -d foo – lists packages which foo depends on at runtime (non-recursive).
Only runtime dependencies are tracked by the package database. Build dependency info is in the ports collection.
To see what ports are needed to build, test, package, or run foo:
cd /usr/ports/`pkg info -oq foo` && make all-depends-list && cd -
There's no way to easily see a complete list of just the build dependencies. You can use build-depends-list
instead of all-depends-list
, but it will not search the dependencies recursively.
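As a workaround, you can approximate a recursive list yourself by re-running build-depends-list in each dependency's port directory. Here is a naive sketch (my own, not from the ports framework; function names are mine, and it revisits shared dependencies, so it can be slow on big trees):

```shell
#!/bin/sh
# Approximate a recursive build-depends-list for one port origin.

deps_of() {
    # one level of build dependencies for an origin like devel/gettext,
    # with the /usr/ports/ prefix stripped off
    (cd "/usr/ports/$1" 2>/dev/null && make build-depends-list 2>/dev/null) |
        sed 's|^/usr/ports/||'
}

walk() {
    # depth-first walk; prints every origin it encounters
    for dep in $(deps_of "$1"); do
        echo "$dep"
        walk "$dep"
    done
}

all_build_deps() {
    # de-duplicate the walk's output
    walk "$1" | sort -u
}

# usage: all_build_deps "`pkg info -oq foo`"
```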
The hardest question to answer is "do I need foo for any of my installed packages?". For this you need to see if foo is in each package's all-depends-list. This will take a lot of time. Here's a script which will do it. I call it whatneeds
, as in whatneeds openssl
(my creation, CC0 license):
#!/bin/sh
[ ! "$1" ] && echo "Usage: $0 portname" && exit 99
port=`pkg info -oq $1`
[ ! "$port" ] && echo "$1 doesn't seem to be a port." && exit 99
sp='/-\|'
echo "By default, $1 is required by these ports (or a dependency thereof):"
pkg info -aoq | sort | while read x
do
  printf '\b%.1s' "$sp"
  sp=${sp#?}${sp%???}
  [ -d "/usr/ports/$x" ] && cd "/usr/ports/$x" || echo -e "\b[could not check $x]"
  make all-depends-list 2> /dev/null | fgrep -q "$port" && echo -e "\b$x"
done
echo -e "\b\c"
Check integrity of installed packages
portmaster
- portmaster -v --check-port-dbdir — offers to delete saved options for ports no longer installed
- portmaster -v --check-depends — makes sure installed ports' dependency info is consistent
pkg check
pkg check -d -n
– checks package manifests for .so files, and reports if any are missing or don't pass cursory checks for validity. Those which aren't fully valid are reported as missing but are usually fine, and you can't do anything about their validity anyway, so this command is rather useless at the moment.
pkg_libchk
If you install the sysutils/bsdadminscripts
port, you can run pkg_libchk
to check for missing libraries. It even tells you which packages are affected.
libchk
If you install the sysutils/libchk
port (which requires Ruby, which is huge), you can run libchk
to check for missing libraries, check for unused libraries, and see exactly which binaries use each library. To figure out which port installed the file needing the library, you need to run pkg info -W /path/to/the/file
.
See which installed packages could be updated
Always rebuild all installed packages as soon as possible after updating the OS.
At any other time:
- pkg audit will tell you which installed packages have security vulnerabilities.
- pkg version -P -v -l "<" will tell you which installed packages could be upgraded from the ports collection. It's slow.
- pkg version -v -l "<" will tell you which installed packages could be upgraded from the packages collection. It's fast.
The upgrade info is based on the info in /usr/ports, not by seeing what's new online.
Some ports will just have a portrevision bump due to changes in the port's Makefile. These are usually unimportant and not worth the pain of rebuilding and reinstalling.
See what has changed in a particular port
To see what's new in a port, I typically just visit FreshPorts in a web browser. For example, https://www.freshports.org/mail/spamassassin has everything you could want to know about mail/spamassassin, including the commit history, which should tell you what's new in the port itself, and often this will include some info about the software that the port installs. Sometimes that's not enough, and you have to also go look at the software's changelog on some other website for details.
Is there a better way?
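I haven't found one, beyond scripting the lookup. For example, a trivial helper (my own, hypothetical) turns a port origin into the corresponding FreshPorts URL:

```shell
# origin -> FreshPorts URL; pair it with pkg info -oq to get the origin
freshports_url() {
    echo "https://www.freshports.org/$1"
}

# usage: freshports_url "`pkg info -oq spamassassin`"
```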
Conveniences
Install nano
I prefer to use a 'visual' text editor with familiar command keys, multi-line cut & paste, and regex search & replace. I never got the hang of the classic editor vi, I find emacs too complicated, and ee is too limited. I used pico for many years, and now use nano, which is essentially a pico clone with more features.
- portmaster -D editors/nano – the -D is so it doesn't prompt me at the end about keeping the distfiles.
See my nano configuration files document for configuration info.
Install Perl libwww
I like to use the HEAD
and GET
commands from time to time, to diagnose HTTP problems. These are part of Perl's libwww module, which is installed by other ports like Spamassassin. Those commands are nice to have anyway, so I like to install them right away:
portmaster www/p5-libwww
This will install a bunch of other Perl modules as dependencies.
Build 'locate' database
Why wait for this to run on Sunday night? Do it now so the locate
command will work:
/etc/periodic/weekly/310.locate
OpenSSL config
The base system comes with OpenSSL libraries installed, as well as the command-line tool /usr/bin/openssl
. Its configuration file is expected to be in /etc/ssl/openssl.cnf
.
At some point it's likely you'll end up with the security/openssl port installed as well (e.g. by installing OpenSMTPD or by enabling HTTPS support when installing nginx). This results in a /usr/local/bin/openssl
being installed, and its config file is /usr/local/openssl/openssl.cnf
, which is not created by default. You can create it yourself:
cd /usr/local/openssl && cp openssl.cnf.sample openssl.cnf
Of course, you could create a symlink if you want both to share the same config file.
You may want to add a line to /etc/make.conf to ensure that other ports are built against the OpenSSL port rather than the libs in the base system.
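One commonly used knob for this (verify against your ports tree's documentation, since the mechanism has changed over the years) is:

```
# /etc/make.conf – build ports against the OpenSSL port, not the base libs
WITH_OPENSSL_PORT=yes
```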
OpenSSL port options
When building OpenSSL from the ports collection, I disabled these 'make config' options:
- SSE2 – a feature of x86 CPUs (Pentium 4 & newer) only, so useless on the BBB's ARM CPU
- SSLv2 – no longer secure
- SSLv3 – no longer secure
- MD2 – no longer secure
I enabled these options:
- RC5 – patent issues are not a concern for me
- DOCS – I generally want documentation for everything I install
I left these enabled as well:
- SHARED
- THREADS
- SCTP – a new protocol that's sort of a cross between UDP & TCP, not widely used
Replacement services
Install OpenNTPD
Instead of the stock ntpd, I prefer OpenNTPD because it's slightly easier to configure and will be safer to update. (I also was perhaps a bit overly paranoid about the stock ntpd's requirement of always listening to UDP port 123.)
portmaster net/openntpd
- In /etc/rc.conf:
ntpd_enable="NO"
openntpd_enable="YES"
openntpd_flags="-s"
If you like, you can use /usr/local/etc/ntpd.conf as-is; it just says to use a random selection from pool.ntp.org, and to not listen on port 123 (it'll use random, temporary high-numbered ports instead).
Logging is same as for the stock ntpd.
- service ntpd stop (obviously not necessary if you weren't running the stock ntpd before)
- service openntpd start
You can tail the log to see what it's doing. You should see messages about valid and invalid peers, something like this:
ntp engine ready
set local clock to Mon Feb 17 11:44:06 MST 2014 (offset 0.002539s)
peer x.x.x.x now valid
adjusting local clock by -0.046633s
Because of the issues with Unbound needing accurate time before it can resolve anything, I am going to experiment with putting time.nist.gov's IP address in /etc/hosts as the local alias 'timenistgov':
128.138.141.172 timenistgov
...and then have that be the first server checked in /usr/local/etc/ntpd.conf:
server timenistgov
servers pool.ntp.org
The hope is that the IP address will suffice when DNS is failing!
Later, I can set up a script to try to keep the timenistgov entry in /etc/hosts up-to-date. Of course, this will not help if they ever change the IP address while the BBB is offline.
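That script might look something like this (a sketch, not battle-tested; the drill invocation and file handling are my assumptions):

```shell
#!/bin/sh
# Refresh the 'timenistgov' entry in /etc/hosts from DNS.
# If DNS is down, keep the stale entry rather than clobbering it.

HOSTS=${HOSTS:-/etc/hosts}

lookup() {
    # drill is in the FreeBSD base system; -Q prints just the answer
    drill -Q "$1" 2>/dev/null | head -n 1
}

rewrite_entry() {
    # $1 = new IP, $2 = hosts file; emits the updated file on stdout
    sed "s/^[0-9.][0-9.]* timenistgov\$/$1 timenistgov/" "$2"
}

main() {
    ip=`lookup time.nist.gov`
    [ -z "$ip" ] && exit 1   # DNS failed; leave the old entry alone
    rewrite_entry "$ip" "$HOSTS" > "$HOSTS.new" && mv "$HOSTS.new" "$HOSTS"
}

# main   # uncomment to run for real
```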
Install OpenSMTPD
The snapshots for the BBB come with the Sendmail daemon disabled in /etc/rc.conf, so immediately some emails (from root to root) start plugging up the queue, as you can see in /var/log/maillog.
This is what's in /etc/rc.conf:
sendmail_enable="NONE"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
Something interesting: from the messages in /var/log/maillog about missing /etc/mail/certs, it looks like the client supports STARTTLS without having to be custom-built with SASL2 like I had to do in FreeBSD 8. Not sure what's up with that.
Rather than enabling Sendmail, I am going to try OpenSMTPD now.
- portmaster mail/opensmtpd – also installs various dependencies, including OpenSSL
- echo 'smtpd_enable="YES"' >> /etc/rc.conf
- cp /usr/local/etc/mail/smtpd.conf.sample /usr/local/etc/mail/smtpd.conf
Edit smtpd.conf to your liking. You probably want the following, at the very least (and replace example.org
with your domain, or comment out that line if you're not accepting mail from outside the BBB):
# This is the smtpd server system-wide configuration file.
# See smtpd.conf(5) for more information.

# To accept external mail, replace with: listen on all
listen on 127.0.0.1
listen on ::1

# If you edit the file, you have to run "smtpctl update table aliases"
table aliases file:/usr/local/etc/mail/aliases

# If 'from local' is omitted, it is assumed
accept from any for domain "example.org" alias <aliases> deliver to mbox
accept for local alias <aliases> deliver to mbox
accept for any relay
Then start the service:
service smtpd start
OpenSMTPD problems
The service first runs smtpd -n
to do a sanity check on the smtpd.conf file. I noticed two problems with this:
1. If hostname
returns a non-FQDN which is not resolvable (e.g. the default, "beaglebone", and this name is not mentioned in /etc/hosts), then smtpd may fail at first with a strange error message:
Performing sanity check on smtpd configuration:
invalid hostname: getaddrinfo() failed: hostname nor servname provided, or not known
/usr/local/etc/rc.d/smtpd: WARNING: failed precmd routine for smtpd
To work around this, make sure the result of running hostname
is a FQDN like "beaglebone.example.org.", which requires modifying the hostname line in /etc/rc.conf, or just add the unqualified hostname as another alias for localhost in /etc/hosts, which is a good idea to do anyway.
2. smtpd consumes all available memory for over 20 minutes if I leave the table aliases
line in the config. It eventually works, but not until after other essential services have failed due to the memory churn. This issue is also affecting runs of makemap
and the nightly run of smtpdctl
by /etc/periodic/daily/500.queuerun. For the latter, I get numerous "swap_pager_getswapspace(16): failed" messages interspersed with "pid 41060 (smtpctl), uid 0, was killed: out of swap space".
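For issue 1, the extra alias in /etc/hosts is a one-liner (assuming the default hostname of "beaglebone"; adjust to taste):

```
127.0.0.1       localhost beaglebone
```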
In late October 2015, I reported both issues:
- re: issue 1, added comments to a bug report of similar behavior on a Raspberry Pi 2
- re: issue 2, submitted a new bug report
Install MySQL
portmaster databases/mysql56-server
This will install mysql56-client, cmake, perl, and libedit. cmake has many dependencies, including Python (py-sphinx), curl, expat, jsoncpp, and libarchive. Depending on whether you've got Perl and Python already (and up-to-date), this will take roughly 3 to 6 hours.
MySQL is a bit of a RAM hog. On a lightly loaded system, it should do OK, though. Just make sure you have swap space!
- Follow the directions to create swap space, if you haven't already.
Secure and start it
Ensure the server won't be accessible to the outside world, enable it, and start it up:
echo '[mysqld]\nbind-address=127.0.0.1\ntmpdir=/var/tmp' > /var/db/mysql/my.cnf
echo 'mysql_enable="YES"' >> /etc/rc.conf
service mysql-server start
– this may take a minute, as it will have to use a little bit of swap.
If you are not restoring data from a backup (see next subsection), do the following to delete the test databases and set the passwords (yes, plural!) for the root account:
- Refer to Securing the Initial MySQL Accounts.
- mysql -uroot
- DELETE FROM mysql.db WHERE Db='test';
- DELETE FROM mysql.db WHERE Db='test\_%';
- SET PASSWORD FOR 'root'@'localhost' = PASSWORD('foo'); – change foo to the actual password you want
- SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('foo'); – use the same password
- SET PASSWORD FOR 'root'@'::1' = PASSWORD('foo'); – use the same password
- SELECT User, Host, Password FROM mysql.user WHERE user='root'; – see what other hosts have an empty root password, and either set a password or delete those rows. For example: DELETE FROM mysql.user WHERE Host='localhost.localdomain';
- \q
- mysqladmin -uroot -pfoo status – this is to make sure the password works and mysqld is alive.
(If you were to just do mysqladmin password foo
, it would only set the password for 'root'@'localhost'.)
Restore data from backup
On my other server, every day, I ran a script to create a backup of my MySQL databases. To mirror the data here, I can copy the resulting .sql file (bzip2'd), which can be piped right into the client on this machine to populate the database here:
bzcat mysql-backup-20151022.sql.bz2 | mysql -uroot -pfoo
— foo is the root password, of course
The backed up data includes the mysql.db and mysql.user tables, thus includes all databases and account & password data from the other server. Obviously there is some risk if the other server has insecure accounts and test databases.
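The nightly script on the other server amounts to something like this (a sketch; the paths, credentials, and function names are illustrative, not the actual script):

```shell
#!/bin/sh
# Dump all databases (including mysql.db and mysql.user) and compress.

backup_file() {
    # e.g. mysql-backup-20151022.sql.bz2
    echo "mysql-backup-`date +%Y%m%d`.sql.bz2"
}

do_backup() {
    mysqldump -uroot -pfoo --all-databases | bzip2 > "/path/to/backups/$(backup_file)"
}

# do_backup   # run nightly from cron
```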
After loading from backup, I recommend also performing any housekeeping needed to ensure the tables are compatible with this server:
mysql_upgrade -pfoo --force
service mysql-server restart
Install nginx
I'm a longtime Apache httpd administrator (even co-ran apache.org for a while) but am going to see if nginx will work just as well for what I need:
- HTTPS with SNI (virtual host) and HSTS header support
- URL rewriting and aliasing
- PHP (to support MediaWiki)
- basic authentication
- server-parsed HTML (for timestamp comments, syntax coloring)
- fancy directory indexes (custom comments, but I can live without)
Let's get started:
- portmaster -D www/nginx – installs PCRE as well
- Modules I left enabled: DSO, IPV6, HTTP, HTTP_CACHE, HTTP_REWRITE, HTTP_SSL, HTTP_STATUS, WWW
- HTTP/2-related modules I left enabled: HTTP_SLICE, HTTPV2, STREAM, STREAM_SSL
- Modules I also enabled: HTTP_FANCYINDEX
- echo 'nginx_enable="YES"' >> /etc/rc.conf
Try it out:
service nginx start
- Visit your IP address in a browser (just via regular HTTP). You should get a "Welcome to nginx!" page.
Immediately I'm struck by how lightweight it is: processes under 14 MB instead of Apache's ~90 MB.
Enable HTTPS service
Prep for HTTPS support (if you haven't already done this):
- Put your private key (.key) and cert (.crt or .pem) somewhere.
- Create a 2048-bit Diffie-Hellman group:
openssl dhparam -out /etc/ssl/dhparams.pem 2048
Enable HTTPS support by putting this in /usr/local/etc/nginx/nginx.conf for each HTTPS server:
server {
    listen 443 ssl;
    server_name localhost;
    root /usr/local/www/nginx;
    ssl_certificate /path/to/your/cert;
    ssl_certificate_key /path/to/your/server_key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_dhparam /etc/ssl/dhparams.pem;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;" always;
    gzip off;
    location / {
        index index.html index.htm;
    }
}
This config includes HSTS support; "perfect" forward secrecy (PFS); and mitigation of the POODLE, CRIME, BEAST, and BREACH attacks. (CRIME attack mitigation is assumed because OpenSSL is built without zlib compression capability by default now.)
Unlike Apache, nginx does not support separate files for your certificate chain. The cert file used by nginx should contain not just your site's cert, but also any other certs that you don't expect clients (browsers) to trust, e.g. any intermediate certs, appended in order after your cert. Otherwise, some clients will complain or will consider your cert to be self-signed.
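For example (filenames are illustrative), building the chained file is just a concatenation in the right order:

```shell
# server certificate first, then each intermediate, working up the chain
make_chain() {
    cat "$@"
}

# make_chain example.org.crt intermediate-ca.crt > /path/to/chained.pem
```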
If you like, you can redirect HTTP to HTTPS:
server {
    listen 80;
    server_name localhost;
    root /usr/local/www/nginx;
    return 301 https://$host$request_uri;
}
service nginx reload
- Check the site again, but this time via HTTPS. Once you verify it's working, you can tweak the config as you like.
If your server is publicly accessible, test it via the SSL Server Test by Qualys SSL Labs.
Handle temporarily offline sites
If a website needs to be taken down temporarily, e.g. for website backups, you can configure nginx to respond with HTTP code 503 ("service temporarily unavailable") any time your backup script creates a file named ".offline" in the document root:
location / {
    if (-f '$document_root/.offline') {
        return 503;
    }
    ...
}
The backup script needs to remove the file when it's done, of course.
Alternatively, you can customize the 503 response page. Just make sure a sub-request for that custom page won't be caught by the "if":
location /.website_offline.html {
    try_files $uri =503;
    internal;
}
location / {
    if (-f '$document_root/.offline') {
        error_page 503 /.website_offline.html;
        return 503;
    }
    ...
}
Here is my custom 503 page for when my wiki is offline:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>wiki offline temporarily</title>
  <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
</head>
<body>
<div style="float: left">
  <h1>Don't panic.</h1>
  <p>The wiki is temporarily offline.</p>
  <p>It might be for a daily backup, in which case it should be online within 15 minutes.</p>
</div>
<div>
  <!-- get your own: http://theoatmeal.com/comics/state_web_summer#tumblr -->
  <img style="float: right; width: 400px" src="//skew.org/oatmeal_tumbeasts/tbrun1.png"
    alt="[Tumbeast illustration by Matthew Inman (theoatmeal.com); license: CC-BY-3.0]" />
</div>
</body>
</html>
nginx quirks
As compared to Apache, nginx has some quirks.
- There is no support for .htaccess files, nor anything similar; only root controls how content is served.
- In the config file, there is no way to toggle or test for modules.
- There is nothing like AddHandler and Action; a content processor must be accessed via its own FastCGI server.
- Server-side includes are rudimentary and do not include my longtime favorite instruction, #flastmod.
- Fancy directory indexes can have custom headers & footers, but the index itself cannot be customized (no adding descriptions or setting the width).
- root and alias are inherited but cannot be overridden. You have to get clever with nested location directives.
- Intermediate TLS certificates must be in the same file as the server certificate.
- The types directive must be a complete list of MIME type mappings. You can't include mime.types and add to it via types directives.
- New log files are created mode 644 (rw-r--r--), owned by the user the worker processes run as ('www'). Their containing directories must be owned by the same user; it is not enough that they just be writable by that user via group permissions.
The location directives are matched against the normalized request URI string in this order:

1. location = string
2. longest matching location ^~ prefix
3. first matching location ~ regex or location ~* case-insensitive-regex
4. longest matching location prefix

I have seen #3 and #4 get mixed up, though.
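A sketch of how the four rules interact (paths and directive bodies are illustrative):

```nginx
location = /status   { return 200; }             # 1: /status exactly, nothing else
location ^~ /images/ { try_files $uri =404; }    # 2: /images/a.php lands here; regexes skipped
location ~ \.php$    { fastcgi_pass 127.0.0.1:9000; }  # 3: any other *.php request
location /docs/      { autoindex on; }           # 4: /docs/readme.txt, if no regex matched
```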
Install PHP
portmaster lang/php56
For use via nginx, make sure the FPM option is checked (it is by default). FPM is a FastCGI Process Manager. It runs a server on localhost port 9000 which handles, via a binary protocol, the launching of PHP processes as if they were CGI scripts.
Configure nginx to use PHP FPM
echo 'php_fpm_enable="YES"' >> /etc/rc.conf
service php-fpm start
Add to /usr/local/etc/nginx/nginx.conf:
location ~ [^/]\.php(/|$) {
    root /usr/local/www/nginx;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    fastcgi_intercept_errors on;
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}
service nginx reload
echo '<?php var_export($_SERVER)?>' > /usr/local/www/nginx/test.php
echo '<?php echo phpinfo(); ?>' > /usr/local/www/nginx/phpinfo.php
- In your browser, visit /test.php/foo/bar.php?v=1 and /phpinfo.php ... when confirmed working, move the test files to somewhere not publicly accessible.
Periodically delete expired PHP session data
If you run PHP-based websites for a while, you'll notice that session data tends to get left behind. This is because PHP defaults to storing session data in /tmp or /var/tmp, and only has a 1 in 1000 chance of running its garbage collector upon the creation of a new session. The garbage collector expires sessions idle longer than php.ini's session.gc_maxlifetime (24 minutes by default). You can increase the probability of it running, but it still only fires when a new session is created, so it's really only useful for sites which get a new session every 24 minutes or less. Otherwise, you're better off (IMHO) just running a script to clean out the stale session files. So I use the script below, invoked from root's crontab:
#!/bin/sh
# how many minutes a session may be idle before PHP considers it stale
gcmin=$(echo `/usr/local/bin/php -i | grep session.gc_maxlifetime | cut -d " " -f 3` / 60 | bc)
echo "Deleting the following stale sess_* files:"
find /tmp /var/tmp -type f -name sess_\* -cmin +$gcmin
find /tmp /var/tmp -type f -name sess_\* -cmin +$gcmin -delete
Of course you can store session data in a database if you want, and the stale file problem is avoided altogether. But then that's just one more thing that can break.
Here's what I put in root's crontab, via crontab -e
:
# every hour, clear out the PHP session cache 10 * * * * /usr/local/adm/clean_up_php_sessions > /dev/null 2>&1
Upgrade PHP
PHP needs to be upgraded practically every month due to security holes. You will see when you run pkg audit
or read your nightly security reports.
portmaster php56
service php-fpm restart
Visit a PHP-based webpage to make sure it's working.
Mitigate listen queue overflows
I started getting some mysterious messages in my system logs like sonewconn: pcb 0xc2d1da00: Listen queue overflow: 16 already in queue awaiting acceptance (838 occurrences). A bit of searching revealed that it could be a denial-of-service attack, port scan, or just naturally heavy network loads (perhaps made worse by system slowdowns).
The common answer is that the kernel's default limit of 128 connections in the TCP listen queue (per port) is too low for busy servers, so you should bump up kern.ipc.somaxconn
to 1024 or more. This is what was recommended in the tuning kernel limits info in The FreeBSD Handbook. However, the current version of the handbook says that the correct setting to adjust is actually kern.ipc.soacceptqueue
. They are actually the same thing!
echo kern.ipc.soacceptqueue=1024 >> /etc/sysctl.conf
service sysctl start
The idea is that each server which listens on TCP ports has a maximum number of connections it accepts on each port. The server might set this to a particular number, or it might just use whatever the kern.ipc.somaxconn value was at the time the server was started. You can see these limits with netstat -Lan. Therefore, after bumping up the kernel limit, you should restart all your servers which are still showing 128:
service php-fpm restart
service nginx restart
service local_unbound restart
sa-spamd (port 783) and sshd (port 22 or whatever) apparently use hard-coded limits of 128, so no need to restart them.
Now when you run netstat -Lan
you should see php-fpm (port 9000), nginx (ports 80 & 443), and unbound (port 53) all now use higher limits like 1024 or 256.
Install MediaWiki
- unalias ls; unsetenv CLICOLOR_FORCE – see below.
- portmaster www/mediawiki126 – or whatever the latest version is.
In the 'make config' step, only enable MySQL and xCache. Disable sockets; that feature is only used by memcached. Don't use pecl-APC because last I checked, you can't use it with PHP 5.6.
Other dependencies which will be installed: php56-zlib, php56-iconv, libiconv, php56-mbstring, oniguruma5, php56-mysql, php56-json, php56-readline, php56-hash, php56-ctype, php56-dom, php56-xml, php56-xmlreader, php56-session, and www/xcache.
The build of oniguruma5 will fail if 'ls' is configured to produce color output, hence the unalias & unsetenv commands. See my .cshrc for more info. Also, oniguruma5 won't install if you already have oniguruma4; if that happens, you have to run portmaster -o devel/oniguruma5 devel/oniguruma4
and rebuild php56-mbstring.
service php-fpm reload
cp /usr/local/share/examples/xcache/xcache.ini /usr/local/etc/php
- Edit /usr/local/etc/php/xcache.ini and set xcache.admin.user and xcache.admin.pass. Consider adding a password hint as a comment. Also consider dropping xcache.size down to something smaller than the default of 60M, maybe 16M to start.
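The edited lines might look something like this (values are illustrative; note that xcache.admin.pass is the MD5 hash of the password, not the password itself):

```ini
xcache.admin.user = "admin"
; password hint: the usual one
xcache.admin.pass = "1a79a4d60de6718e8e5b326e338ae533"
xcache.size = 16M
```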
I already have the database set up (restored from a backup of another installation), so instead of doing the in-place web install, I'll just copy my config, images and extensions from my other installation:
scp -pr 'otherhost:/usr/local/www/mediawiki/{AdminSettings.php,LocalSettings.php,images,robots.txt,favicon.ico}' /usr/local/www/mediawiki
scp -pr 'otherhost:/usr/local/www/mediawiki/extensions/{CheckUser,Cite,ConfirmEdit,Nuke}' /usr/local/www/mediawiki/extensions
Side note: In an attempt to increase security (though with a performance penalty), I've replaced the block of database variables in LocalSettings.php as well as the whole of AdminSettings.php with something like include("/path/to/db-vars.php");
, with db-vars.php containing the block in question wrapped in <?php
...?>
. So I have to make sure to grab those files as well, and make sure they're outside of any website's document root, yet still readable by the nginx worker process (e.g. owner or group 'www') and backup scripts, but not anyone else.
- Adjust nginx.conf appropriately (replacing my previous "location /" block):

location / {
    index index.php;
    rewrite ^/?wiki(/.*)?$ /index.php?title=$1 last;
    rewrite ^/*$ /index.php last;
}
This config supports short URLs like /wiki/articlename.
- Also in nginx.conf, replace root /usr/local/www/nginx; with root /usr/local/www/mediawiki;
- service nginx reload
Now test it by browsing the wiki.
Suggested nginx config
This is what I use in my nginx.conf:
server {
    listen 443 ssl;
    server_name offset.skew.org;
    root /usr/local/www/mediawiki;
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;" always;

    # deny access to certain SEO bots looking for places to upload backlinks;
    # see http://blocklistpro.com/content-scrapers/
    if ($http_user_agent ~* (AhrefsBot|SiteBot|XoviBot)) {
        return 403;
    }

    # allow access to custom 503 page configured in "location /" block
    location = /.wiki_offline.html {
        try_files $uri =503;
        internal;
    }

    # allow access to non-skin images; return 404 if not found
    location ^~ /resources/assets/ { try_files $uri =404; }
    location ^~ /images/ { try_files $uri =404; }

    # deny access to MediaWiki's internals
    location ^~ /cache/ { deny all; }
    location ^~ /docs/ { deny all; }
    location ^~ /extensions/ { deny all; }
    location ^~ /includes/ { deny all; }
    location ^~ /languages/ { deny all; }
    location ^~ /maintenance/ { deny all; }
    location ^~ /mw-config/ { deny all; }  # comment out during installation
    location ^~ /resources/ { deny all; }
    location ^~ /serialized/ { deny all; }
    location ^~ /tests/ { deny all; }

    # deny access to core dumps
    location ~ ^.*\.core$ { deny all; }

    location / {
        # if .offline file exists, return custom 503 page
        if (-f '$document_root/.offline') {
            error_page 503 /.wiki_offline.html;
            return 503;
        }
        # if directory requested, pretend its index.php was requested
        index index.php;
        # short URL support assumes LocalSettings.php has
        #   $wgScriptPath = "";
        #   $wgArticlePath = "/wiki/$1";
        # if /wiki/foo requested, pretend it was /index.php?title=foo
        rewrite ^/?wiki(/.*)?$ /index.php?title=$1 last;
        # if anything nonexistent requested, pretend it was /index.php
        try_files $uri /index.php;
    }

    # pass requests for existing .php scripts to PHP FPM
    location ~ [^/]\.php(/|$) {
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        fastcgi_intercept_errors on;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
Install rsync
portmaster net/rsync
Install procmail
portmaster mail/procmail
Install mutt
Mutt is an email client with an interface familiar to Elm users.
portmaster mail/mutt
Additional options I enabled: SIDEBAR_PATCH. Options I disabled: HTML, IDN, SASL, XML. These dependencies will be installed: db5, mime-support.
Install tt-rss
Tiny Tiny RSS is an RSS/Atom feed aggregator. You can use its own web-based feed reader or an external client like Tiny Reader for iOS.
portmaster www/tt-rss
Options I disabled: GD (no need for generating QR codes). These dependencies will be installed: php56-mysqli, php56-pcntl, php56-curl, php56-xmlrpc, php56-posix.
If you intend to have FEED_CRYPT_KEY defined in the tt-rss config, install php56-mcrypt:
unalias ls && unsetenv CLICOLOR_FORCE
– This is so libmcrypt's 'configure' won't choke on colorized 'ls' output.
portmaster security/php56-mcrypt
– This will also install libmcrypt and libltdl.
If this were a new installation, I'd have to create the database, run
cat /usr/local/www/tt-rss/schema/ttrss_schema_mysql.sql | mysql -uroot -pfoo
to set up the tables, and then edit /usr/local/www/tt-rss/config.php. But since I already have the database, this is essentially an upgrade, so I need to treat it as such:
echo 'ttrssd_enable="YES"' >> /etc/rc.conf
- Add an entry for /var/log/ttrssd.log in /etc/newsyslog.conf. In 2014, I had trouble getting log rotation to work; I think ttrssd must be shut down during the rotation. Is this fixed?
- Install the clean-greader theme:
cd /usr/local/www/tt-rss/themes.local
fetch https://github.com/naeramarth7/clean-greader/archive/master.zip
unzip master.zip && rm master.zip
mv clean-greader-master clean-greader
cp clean-greader/clean-greader.css .
cd /usr/local/www/tt-rss
- Edit config.php as needed to replicate my old config, but be sure to set SINGLE_USER_MODE in it.
Regardless of whether upgrading or installing anew, make sure to set up nginx as needed. Most online instructions I found are for when you use a dedicated hostname for your server, whereas I run it from an aliased URL. A working config is below. It assumes the root directory is not set at the server block level, and it will serve up my custom 503 page (explained elsewhere) when the database is offline.
location ^~ /tt-rss/cache/ { deny all; }
location ^~ /tt-rss/classes/ { deny all; }
location ^~ /tt-rss/locale/ { deny all; }
location ^~ /tt-rss/lock/ { deny all; }
location ^~ /tt-rss/schema/ { deny all; }
location ^~ /tt-rss/templates/ { deny all; }
location ^~ /tt-rss/utils/ { deny all; }

location = /tt-rss/.reader_offline.html {
    root /usr/local/www;
    try_files $uri =503;
    internal;
}

location ~ ^/tt-rss/.*\.php$ {
    root /usr/local/www;
    fastcgi_intercept_errors on;
    if (-f '$document_root/tt-rss/.offline') {
        error_page 503 /tt-rss/.reader_offline.html;
        return 503;
    }
    fastcgi_param SCRIPT_FILENAME $request_filename;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}

location /tt-rss/ {
    root /usr/local/www;
    if (-f '$document_root/tt-rss/.offline') {
        error_page 503 /tt-rss/.reader_offline.html;
        return 503;
    }
    index index.php;
}
service nginx reload
- Visit the site. If it goes straight to the feed reader, no upgrades were needed. If you have trouble and keep getting "primary script unknown" errors, consult Martin Fjordvald's excellent blog post covering all the possibilities.
- Edit config.php again and unset SINGLE_USER_MODE.
- Visit the site and log in. All should be well.
Install SpamAssassin
Install sa-utils
I prefer to just install the mail/sa-utils port; it will install SpamAssassin as a dependency.
The sa-utils port adds a script: /usr/local/etc/periodic/daily/sa-utils
. This script will run sa-update and restart spamd every day so you don't have to do it from a cron job. You get the output, if any, in your "daily" report by email.
portmaster mail/sa-utils
- When prompted for sa-utils, enable SACOMPILE. This will result in re2c being installed.
- When prompted for spamassassin, I enabled: DCC, DKIM, RELAY_COUNTRY. I am not sure about the usefulness of PYZOR and RAZOR these days. Are they worth the overhead?
- When prompted for the various Perl modules, I used all the default options.
- When prompted for dcc, I disabled the DCC milter option and accepted the license.
echo 'spamd_enable="YES"' >> /etc/rc.conf
The spamassassin post-install message mentions the possibility of running spamd as a non-root user, but this user must have read/write access to users' ~/.spamassassin directories. I have not figured out how to best handle that, so I just run it as root.
GeoIP setup
- Enabling RELAY_COUNTRY results in GeoIP being installed, so it's a good idea to add this to root's crontab via
crontab -e
:
# on the 8th day of every month, update the GeoIP databases
50 0 8 * * /usr/local/bin/geoipupdate.sh > /dev/null 2>&1
- Run
/usr/local/bin/geoipupdate.sh
once if you didn't do it after the GeoIP install.
sa-update setup
- Assuming you enabled SACOMPILE, make sure this line in
/usr/local/etc/mail/spamassassin/v320.pre
is not commented out:
loadplugin Mail::SpamAssassin::Plugin::Rule2XSBody
- Put the flags sa-update needs in /etc/periodic.conf. Pick one:
- Core rulesets:
daily_sa_update_flags="-v --gpgkey 24F434CE --channel updates.spamassassin.org"
- Core + "Sought" rulesets:
daily_sa_update_flags="-v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --gpgkey 24F434CE --channel updates.spamassassin.org"
- To use the "Sought" ruleset, you need to run
fetch http://yerp.org/rules/GPG.KEY && sa-update --import GPG.KEY && rm GPG.KEY
- Core rulesets:
- Test sa-utils:
/usr/local/etc/periodic/daily/sa-utils
- If it successfully fetches and compiles the rules and restarts spamd, then you can safely add
daily_sa_quiet="yes"
to /etc/periodic.conf so the verbose output isn't in your nightly emails.
Allow DCC traffic
DCC helps SpamAssassin to give bulk mail a higher score. This means legitimate mailing list posts will also be scored higher, so using it means you have to be vigilant about whitelisting or avoiding scanning mailing list traffic.
To enable DCC checking, assuming you enabled the DCC option when building the SpamAssassin port:
- Make sure the appropriate line is uncommented in /usr/local/etc/mail/spamassassin/v310.pre.
- Make sure UDP traffic is allowed in & out on port 6277. Assuming you set up the "workstation" IPFW firewall, this means:
- Add
6277/udp
to thefirewall_myservices
line in /etc/rc.conf. - Just to get it working for now, run
ipfw add 3050 allow udp from any to me dst-port 6277
- Add
See the DCC FAQ for more info on the firewall requirements.
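The ipfw command above only lasts until the next restart, since the rules get flushed. To make the allowance permanent, equivalent rules can go in the custom /etc/ipfw.rules script; the rule numbers here are arbitrary, and you should check the directions against your own ruleset (DCC replies arrive from the server's port 6277):

```shell
# allow DCC's UDP traffic: queries out to port 6277, replies back in
ipfw -q add 3050 allow udp from me to any dst-port 6277 out
ipfw -q add 3051 allow udp from any 6277 to me in
```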
Start and test it
Now you can start up spamd:
service sa-spamd start
Assuming you installed procmail, make sure your ~/.forward contains something like this:
"|exec /usr/local/bin/procmail || exit 75"
And make sure your ~/.procmailrc contains something like this:
:0fw: spamassassin.lock
* < 600000
|/usr/local/bin/spamc
Keep in mind when editing your .procmailrc that you want to avoid running spamassassin on administrative messages or mailing list traffic.
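For example, a delivery recipe placed before the spamc one can file list traffic directly so it never reaches the scanner. The List-Id pattern and mailbox name here are hypothetical:

```procmail
# deliver example.org list mail straight to a folder, skipping spamc
:0:
* ^List-Id:.*example\.org
lists/example
```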
Now send yourself a test message from another host. The message should arrive in your inbox with X-Spam-* headers added. Check /var/log/maillog for errors.
Enable short-circuit rules
- In /usr/local/etc/mail/spamassassin/v320.pre, uncomment
loadplugin Mail::SpamAssassin::Plugin::Shortcircuit
. - In /usr/local/etc/mail/spamassassin/local.cf, uncomment all the lines that begin with
shortcircuit
. - Create /usr/local/etc/mail/spamassassin/shortcircuit.cf, using the content at https://wiki.apache.org/spamassassin/ShortcircuitingRuleset.
- service sa-spamd reload
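For reference, that page's ruleset is mostly a list of shortcircuit directives mapping existing tests to an immediate verdict. A sketch of its shape (consult the wiki page for the authoritative file):

```
# ham we trust enough to skip further scanning
shortcircuit USER_IN_WHITELIST       on
shortcircuit USER_IN_DEF_WHITELIST   on
shortcircuit USER_IN_ALL_SPAM_TO     on
# mail originating from our own trusted hosts
shortcircuit ALL_TRUSTED             on
# blacklisted senders: call it spam immediately
shortcircuit USER_IN_BLACKLIST       spam
shortcircuit USER_IN_BLACKLIST_TO    spam
```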
Suggested spamc configuration
Create /usr/local/etc/mail/spamassassin/spamc.conf with the following content:
# max message size for scanning = 600k
-s 600000
# prefer IPv4
-4
This is for local users running spamc to send mail to spamd for scanning, like in the .procmailrc example above.
Suggestions for local.cf
Just a few other things I added to local.cf, affecting all scanned mail:
Add verbose headers
Add more X-Spam-* headers to explain more fully what tests were run and how the score was affected.
add_header all Report _REPORT_
Allow users to define their own rules
allow_user_rules 1
This allows the processing of custom rules that users put in ~/.spamassassin/user_prefs. Obviously not something you want to do if you don't trust your users to write rules that don't bog down the system or cause mail to be lost.
Adjusting scores for mailing lists
header FROM_MAILING_LIST exists:List-Id
score FROM_MAILING_LIST -0.1
header EXAMPLE_LIST List-Id =~ /<[^.]+\.[^.]+\.example\.org>/
score EXAMPLE_LIST -5.0
Users can then further adjust these scores in their ~/.spamassassin/user_prefs:
score FROM_MAILING_LIST -1.0
score EXAMPLE_LIST -100.0
Whitelist hosts
# maybe not ideal, but at one point I missed some legit eBay mail
whitelist_from_rcvd *.ebay.com ebay.com
Favor mail originating locally
# probably not spam if it originates here (default score 0)
score NO_RELAYS 0 -5 0 -5
# hosts appearing in Received: headers of legitimate bounces
# (bounces for mail that originated here)
# as per https://wiki.apache.org/spamassassin/VBounceRuleset
whitelist_bounce_relays foo.example.org
Install Icecast
Clients (listeners) will connect to my Icecast server in order to listen to the SHOUTcast v1 stream (AAC or MP3), which I'll be generating elsewhere and transmitting to the server.
portmaster audio/icecast2
echo 'icecast_enable="YES"' >> /etc/rc.conf
cp /usr/local/etc/icecast.xml.sample /usr/local/etc/icecast.xml
- edit /usr/local/etc/icecast.xml: change location, admin, passwords, hostname, and the listen-socket port. Uncomment shoutcast-mount, ssl-certificate, and changeowner/user/group. If you use "/" as a mount point (e.g. in shoutcast-mount), comment out or change the alias for "/". Also uncomment another listen-socket port, with ssl "1", for admin purposes.
mkdir /var/log/icecast
chmod a+rwx /var/log/icecast
(The log directory must be writeable by the icecast process.)
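The icecast.xml fragments touched in the edit above end up looking roughly like this. The ports, paths, and user/group are examples, not necessarily the port's defaults:

```xml
<!-- sketch of the edited fragments; values are examples -->
<listen-socket>
    <port>8000</port>
    <shoutcast-mount>/stream</shoutcast-mount>
</listen-socket>
<listen-socket>
    <!-- second socket, TLS-enabled, for admin use -->
    <port>8443</port>
    <ssl>1</ssl>
</listen-socket>

<paths>
    <ssl-certificate>/usr/local/share/icecast/icecast.pem</ssl-certificate>
</paths>

<security>
    <changeowner>
        <user>nobody</user>
        <group>nogroup</group>
    </changeowner>
</security>
```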
Generate a combined private/public key pair for TLS encryption:
cd /usr/local/share/icecast
openssl genrsa -out icecast-private.key 2048
openssl req -sha256 -new -key icecast-private.key -out icecast-cert.csr -subj '/CN=foo.example.org' -nodes
(replace foo.example.org with the actual FQDN)
openssl x509 -req -days 720 -in icecast-cert.csr -signkey icecast-private.key -out icecast-cert.crt
cat icecast-private.key icecast-cert.crt > icecast.pem
The resulting icecast.pem file must be readable by the icecast process. This key pair is sufficient to establish encryption, but web browsers will complain or prevent access because the certificate (the public key) is self-signed. So another option, if you already have a cert signed by a widely trusted CA, is to make icecast.pem be the concatenation of 1. the private key used to generate that cert, and 2. the full chain of certs, ending with the cert itself. Of course, if you do that, make sure it is only readable by root and the icecast process.
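A sketch of that alternative. The filenames are placeholders, and dummy contents stand in for the real key and chain so the commands are runnable as-is:

```shell
# stand-ins for the real CA-signed key and full certificate chain
printf '%s\n' 'PRIVATE KEY' > foo.example.org.key
printf '%s\n' 'CERT CHAIN'  > foo.example.org.fullchain.pem

# order matters: private key first, then the chain ending with your own cert
cat foo.example.org.key foo.example.org.fullchain.pem > icecast.pem
chmod 600 icecast.pem   # contains the private key: keep it tightly permissioned
```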
Allow traffic through the firewall:
- Assuming you set up a 'workstation'-type ipfw firewall, add the appropriate TCP ports to firewall_myservices in /etc/rc.conf.
- service ipfw restart
- sh /etc/ipfw.rules
(reload the custom ipfw rules, because the restart flushed them).
Start the server:
service icecast2 start
tail -f /var/log/icecast/error.log
– watch for any problems
Try connecting a source and a listener. Try visiting the server URL with the path /status.xsl.
Distributed computing projects
This is a tale of failure. None of the projects supported by BOINC have native support for armv6 processors. This includes my longtime favorite, distributed.net. So it's not an option to run these on the BeagleBone Black right now.
Nevertheless, here are the notes I started taking when I tried to get something working:
I like to run the distributed.net client on all my machines, but it is not open-source, and there are no builds for ARMv6 on FreeBSD yet.
Ordinarily you can run the client through BOINC with the Moo! Wrapper, but this doesn't work either. Here's the general idea with BOINC, though:
Install BOINC and start the client:
portmaster net/boinc
– this will install several dependencies, including Perl. In the 'make config' screens for those, I generally disable docs & examples, X11, NLS (for now), and IPv6 (for now). When installing Perl, I chose to disable 64bit_int because it says "on i386".
echo 'boinc_client_enable="YES"' >> /etc/rc.conf
service boinc-client start
— there's a bug in the port; it writes the wrong pid to the pidfile, so subsequent 'service' commands will fail
- Create account on the BOINC project page you're interested in
- Go to your account info on that page and click on Account Keys
- Create ~boinc/account_whatever.xml as instructed. Put the account key (not weak key) in a file, e.g. ~boinc/whatever.key.
boinccmd --project_attach http://moowrap.net/ `cat ~boinc/whatever.key`
tail -f ~boinc/stdoutdae.txt
— this is the log
Blast! Look what comes up in the log: This project doesn't support computers of type armv6-pc-freebsd
None of the projects I tried (Moo!, SETI@Home, Enigma@Home) are supported. So I went ahead and commented out the boinc_client_enable line in /etc/rc.conf and manually killed the boinc-client process.
I later filed a freebsd-armv6 client port request at distributed.net.