Adding a subdomain with Certbot

It’s relatively straightforward to have multiple domains and sub-domains use the same Certbot certificate when they all point to the same document root. Adding a sub-domain that points somewhere else is not as easy.

I wanted to add a beta sub-domain for testing a site rewrite. I could get the certificate to generate, but at first I couldn’t figure out how to modify the Apache config files for the beta site. This is how I eventually did it.

To figure out what should be done, I ran this command to expand the existing certificate.


sudo /opt/certbot/certbot-auto --installer apache --webroot -w /www/example -d example.com,www.example.com  --webroot -w /www/example_beta -d beta.example.com

To verify that it did what I wanted, I ran:


/opt/certbot/certbot-auto certificates

and got this:


Certificate Name: example.com
    Domains: example.com beta.example.com www.example.com
    Expiry Date: 2018-01-14 19:35:43+00:00 (VALID: 89 days)
    Certificate Path: /etc/letsencrypt/live/www.example.com/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/www.example.com/privkey.pem

I was originally looking at the example.com file in the sites-available directory, but what I should have been looking at were the Certbot-generated files that end in -le-ssl.conf.
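
On a Debian/Ubuntu layout they sit next to the regular virtual host files, so a quick way to find them is:

ls /etc/apache2/sites-available/*-le-ssl.conf

The generated file looked like this: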


<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName example.com
    ServerAlias www.example.com
    ServerAdmin root@example.com

    DocumentRoot /www/example

    CustomLog /var/log/apache2/example.com.access_log combined
    ErrorLog /var/log/apache2/example.com.error_log

    ErrorDocument 404 /missing.php

    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/www.example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/www.example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/www.example.com/chain.pem
</VirtualHost>
</IfModule>

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName beta.example.com
    ServerAdmin root@example.com

    DocumentRoot /www/example_beta

    CustomLog /var/log/apache2/example.com.access_log combined
    ErrorLog /var/log/apache2/example.com.error_log

    ErrorDocument 404 /missing.php

    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/www.example.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/www.example.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/www.example.com/chain.pem
</VirtualHost>
</IfModule>

You need to run this command after changing the config files:


sudo service apache2 restart

Don’t forget to change your DNS record to add the sub-domain.
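
A quick way to confirm the new record has propagated is dig (from the dnsutils package on Ubuntu); it should print your server’s IP address:

dig +short beta.example.com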

Odd Thing About Symlinks

I refactored a site recently and rather than having a bunch of files with identical headers and footers, I created an index.php file with conditionals to create the metadata and then include the content. To preserve the existing file structure, I just created symlinks to the index.php file. So I did something like this:


ln -s index.php oldname1.php
ln -s index.php oldname2.php

This works great and is pretty easy to implement. (Yes, there are other ways to do it.)

I ran into a problem when I tried to create symlinks in some subdirectories to prevent the “You don’t have permission to access /include.php/ on this server.” error message. It turns out that you have to be in the subdirectory where you want the alias to reside (more precisely, the link’s target has to be written relative to the directory the link lives in). This works:


cd Guides
ln -s ../manuals/index.php index.php

This does not:

cd manuals
ln -s index.php Guides/index.php
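
The second version fails because a symlink stores its target path literally, and a relative target is resolved from the directory containing the link, not from where you ran ln. So Guides/index.php would point at Guides/index.php, i.e. at itself. Once you know that, you can also create the link from the parent directory without cd-ing, as long as the target is written relative to the link’s location:

ln -s ../manuals/index.php Guides/index.php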

Is the server alive?

I’ve been working on a server that has been going offline. I can tell it’s offline because my mail program shows an indicator when it can’t fetch mail. The first time it happened we called the data center and had them restart everything. That worked for a while, but then it happened again. This time we went over to the data center and the server itself was fine; replacing the switch that connected two servers made the problem go away. A few weeks later the server went offline again, and this time it looked like a DDoS. It has been fine ever since, but we wanted a better way of knowing there is an issue than me happening to look at my email program and noticing that I wasn’t getting mail.

One way to see if it is alive is to ping the server every few minutes. If the ping gets no response, send a text to the owner and an email to me.

I have a couple of cron jobs running under my crontab on my server that run every minute and send out notifications by text or email. So I edited the crontab to add another job to the list.

To see the cron jobs, type crontab -l -u myuserid. To edit them, type crontab -e.


# m h  dom mon dow   command
*/1 * * * * /home/myuserid/reminder.sh
*/1 * * * * /home/myuserid/ping_remote_server.sh

Here’s the shell script I wrote to ping the server and send out notifications.


#!/bin/sh
# Ping the remote server once, quietly, and discard all output.
ping -c 1 -q remoteservername.com 1>/dev/null 2>&1
return_code=$?

# A non-zero exit status means the ping failed, so send the alerts.
if [ $return_code -ne 0 ]; then
    #echo "Failure";
    mail -s "Server Down" 8055551212@mobile.mycingular.com myuserid@MyServer.com < /dev/null
fi

I ping it once (-c 1), quietly (-q) since I don’t care about the output, and send the result and any error messages to /dev/null. All I care about is whether it was successful. Every Linux command sets an exit status code, which you can access with $?. Zero means success, so I check whether it is not 0.
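
You can see the exit status at the command line; the echo prints 0 on success and something else on failure:

ping -c 1 -q remoteservername.com 1>/dev/null 2>&1
echo $?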

I don’t need any details in the text that I send, so I set the subject and take the message body from /dev/null, which leaves it empty.
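
If you ever did want details, one approach is to capture the ping output in a scratch file and mail that on failure (a sketch; /tmp/ping.txt is an arbitrary choice):

# capture ping output; any writable scratch path works
ping -c 1 remoteservername.com > /tmp/ping.txt 2>&1
if [ $? -ne 0 ]; then
    mail -s "Server Down" myuserid@MyServer.com < /tmp/ping.txt
fi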

Spinning up a new Virtual Machine

Most of my websites are low volume and so I host them on the same VPS at Linode. For a new project, I decided to put the websites on a separate VPS. I spent a day researching the current choices and you probably won’t go wrong with any of them. For me, it came down to either Linode (which I’ve been happy with) or Digital Ocean (which I’ve used for backups and helping my nephew learn programming). Since I don’t need a lot of space right now and I had a referral code, I decided to go with the $5/mo Digital Ocean plan.

Since I’m familiar with it, I installed Ubuntu 14.04.2 LTS with Apache, MySQL, and PHP, and made some customizations to keep it consistent with my current server.

Users, Permissions, and Groups

The first thing I did was log in as root, create a new user for myself, and add that user to the sudoers file.


   adduser myusername

There are a couple of ways to do this, but for now I just added my user name directly instead of creating a sudo group like I normally do.


    myusername ALL=(ALL:ALL) ALL
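
The safe way to make that edit is visudo, which syntax-checks the file before saving (a typo in sudoers can lock you out of sudo entirely):

    visudo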

Without logging out of the root account, I logged in with my user name and edited /etc/ssh/sshd_config. This was a test to see whether I could log in as myself and whether I could edit files owned by root using sudo. I then changed PermitRootLogin to no:


    # Authentication:
    PermitRootLogin no

To get the changes to take effect, I restarted the SSH daemon with sudo service ssh restart.

I like to have a group that is able to edit all of the files in www. I call this different things on different machines, e.g. ‘staff’, ‘web-admin’, ‘www’. On this machine I’m using ‘www’.

If you type the command groups, you can see which groups you belong to. To add a new group you can edit /etc/group directly or use these commands to create the group and add a member to it.


    sudo groupadd www
    sudo usermod -a -G www myusername
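
Note that new group membership doesn’t take effect in a session that’s already open; log out and back in, or start a subshell with the new group active:

    newgrp www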

Fail2ban

I don’t know how they find random IP addresses to attack, but 7 minutes after installing Fail2ban it banned its first IP address. Installation is straightforward. I didn’t make any customizations except to whitelist my IP address and change the email address for notifications.


    sudo apt-get install fail2ban
    sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
    sudo vi /etc/fail2ban/jail.local
    sudo service fail2ban restart
    sudo iptables -L
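
For reference, the two settings I touched in jail.local look something like this (the address and email here are placeholders):

    [DEFAULT]
    # never ban my own address
    ignoreip = 127.0.0.1/8 203.0.113.10
    # where ban notifications go
    destemail = myuserid@MyServer.com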

Updating Apache

First, I updated my .profile as I described in an earlier post. Then I changed the default location for my web pages from /var/www/html to /srv/www. First I created a www directory in /srv, then made a symlink to it in the filesystem root so the sites live at /www.
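
Something like this, assuming nothing is at /www yet:

    sudo mkdir /srv/www
    sudo ln -s /srv/www /www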

I want everyone in the web admin group to be able to edit the files, so I ran sudo chown myusername:www www.
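
The chown by itself only sets the group owner; for the group to actually edit files it also needs write permission, and the setgid bit makes new files inherit the www group:

    sudo chmod -R g+w /srv/www
    sudo chmod g+s /srv/www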

I edited the /etc/apache2/apache2.conf file to change the default location to /www and prevent directory listing—I removed Indexes from Options.


    <Directory /www/*>
        Options FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>

I don’t want people to see the contents of my .inc files. Some people add the suffix .php to them to hide them, but I prefer to make them all invisible to browsers.


# We don't want people to see .inc files
<Files  ~ "\.inc$">
  Order allow,deny
  Deny from all
</Files>

I also don’t want people to see the Subversion files from my WordPress installs and backups from old projects that used Subversion (nowadays I use git).


<DirectoryMatch "\.svn">
  Order deny,allow
  Deny from all
</DirectoryMatch>

And I don’t want anyone to see .git directories, although best practice says don’t put them in the document root in the first place.


<DirectoryMatch "\.git">
  Order deny,allow
  Deny from all
</DirectoryMatch>

Since I don’t have a domain name attached to this IP address, I added these lines to the bottom of the conf file.


    # Suppress the warning message when restarting Apache until we get a FQDN
    ServerName localhost

I have a couple of templates for websites so I put one in the www directory for testing. Then I changed the DocumentRoot in 000-default.conf to that directory and restarted Apache.
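
The relevant line in /etc/apache2/sites-available/000-default.conf, with /www/template standing in for wherever the template actually lives:

    DocumentRoot /www/template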

Miscellaneous

I don’t plan to use a database for these websites, but I decided to set up MySQL and phpMyAdmin. I followed the instructions at Digital Ocean to change the root password and add myself as a user.
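
The gist of the user setup looks something like this, with placeholder names and password:

    # create the user and give it full privileges (placeholders throughout)
    mysql -u root -p -e "CREATE USER 'myusername'@'localhost' IDENTIFIED BY 'a-strong-password';"
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'myusername'@'localhost'; FLUSH PRIVILEGES;"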

The phpMyAdmin install is straightforward, except that you need to add this line:


Include /etc/phpmyadmin/apache.conf

to the end of your /etc/apache2/apache2.conf and restart Apache—even if you are running the latest version of Ubuntu LTS.
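
One way to append it without opening an editor:

echo "Include /etc/phpmyadmin/apache.conf" | sudo tee -a /etc/apache2/apache2.conf
sudo service apache2 restart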

Then I installed git using the command sudo apt-get install git-core.

Finally, I often convert MySQL databases to SQLite databases for use in Apple apps, so I installed SQLite.


    sudo apt-get install php5-sqlite
    sudo apt-get install sqlite3