More Re-negotiation errors in Apache logs

I was worried that updating my site to accept only secure https connections might lock out some customers who are still using Windows XP and old IE browsers, so I was concerned when, after updating my SSL ciphers, I kept seeing errors like this:


SSL Library Error: error:14080152:SSL routines:SSL3_ACCEPT:unsafe legacy renegotiation disabled
[client 180.76.15.31:51166] AH02225: Re-negotiation request failed
[client 134.249.131.0:51259] AH02225: Re-negotiation request failed
[client 134.249.131.0:51123] AH02225: Re-negotiation request failed
[client 1.39.57.169:37633] AH02225: Re-negotiation request failed

However, looking up the first IP with whois yields a netname of Baidu, the next two are located in Ukraine, and one is from India. There are a whole bunch of these, so I’m guessing it’s spammers looking for forms they can fill in with links. I just had 643 catalog requests a few days ago that defeated my rudimentary spam-checking tool, so that’s the theory I’m going with for now.
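
If you want to check who owns one of these addresses quickly, whois piped through grep does the trick; the field names vary a little from registry to registry, so treat this as a rough sketch:


# look up the owner and country of the first address
whois 180.76.15.31 | grep -iE 'netname|country'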

Converting another site to https

I have another site that is fairly small and uses a responsive design template. I decided to convert that site to https with certbot, which is the EFF’s client for Let’s Encrypt.

I had previously played around with a self-signed certificate, so I went to /etc/ssl and removed all of the keys for the site. Then I followed the directions on the certbot site to let it guide me through the process. While removing the old certificates I noticed that the directories were owned by root and that I couldn’t cd into them without first changing to root. Thinking that certbot would put the new certificates in the same place (which it doesn’t), I decided to run the commands as root, so after downloading certbot-auto and changing its permissions I changed to root with ‘sudo su’.

The bot is really smart about looking at your system and figuring out which domains you are hosting and which already have certificates. It appears to look at the sites-available files, and if it finds a section for https (set off with <VirtualHost *:443> ... </VirtualHost>) it assumes that a certificate is already in place. So the domain I was targeting didn’t show up in the list. Removing that section from the file and running certbot again found the site.
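
For reference, getting certbot-auto onto the server looked roughly like this; the download URL is the one the certbot site gave at the time and may well have changed since, so treat this as a sketch rather than gospel:


# download certbot-auto and make it executable
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto

# run the guided Apache installer as root
sudo su
./certbot-auto --apache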

Certbot doesn’t use that section; instead, after the installation it added these lines to the original sites-available file:


RewriteEngine on
RewriteCond %{SERVER_NAME} =wellgolly.com [OR]
RewriteCond %{SERVER_NAME} =www.wellgolly.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]

I did not have to restart Apache. When I visited the site, though, I noticed that it was not displaying properly. The cause was http links in the header for the fonts and Bootstrap. Changing all of those links to https fixed the display problems.
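
A quick grep from the site’s document root is an easy way to find any hard-coded http links that are left; the path here is just an example:


# list every remaining http:// reference in the templates
cd /var/www/mysite
grep -rn 'http://' --include='*.php' --include='*.html' .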

The next thing was to look for all of the internal links that needed to be changed to https. I used the link checker at W3.org to check for links that I missed. Finally I went to SSL Labs and tested the site. It got an A+.

Unlike the other certificates, which are stored in /etc/ssl, certbot puts all of its files in /etc/letsencrypt/live/.

Also, rather than adding a section for the https port (443) to your vhost file, it creates a separate file for https. It uses the same name and appends ‘-le-ssl.conf’.

My site is fairly straightforward, but you may need to add things to this file if you have a more complicated setup.
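
For reference, the generated file looks roughly like this. The domain, DocumentRoot and any extra directives are placeholders for whatever your original vhost contains; the certificate paths under /etc/letsencrypt/live/ are where certbot keeps its files:


<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName mydomain.com
    ServerAlias www.mydomain.com
    DocumentRoot /var/www/mysite

    # lines added by certbot, pointing at its certificate store
    SSLCertificateFile /etc/letsencrypt/live/mydomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>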

If you have more than one domain that points to the same site, you can use the same certificate. If www.myreallylongdomainname.com points to www.mydomain.com, then after you set up the certificate for one domain you can add the other domains to the same certificate. Unlike when setting up the original certificate, you do need to restart Apache for the change to take effect.


sudo ./certbot-auto certonly --cert-name mydomain.com -d mail.mydomain.com,mydomain.com,www.mydomain.com,myreallylongdomainname.com,www.myreallylongdomainname.com
sudo service apache2 restart

If you have lots of sites on the same server, you can see which ones you have certificates for by running the command:


sudo ./certbot-auto certificates

You will get an email when the certificate is about to expire. cd to the directory where certbot-auto is located and run:


sudo ./certbot-auto renew

You can also run the renewal check automatically from a cron job. I set up one to run monthly. It looks like this:


0 0 1 * * /home/userid/certbot-auto renew -q
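
If you want to be sure a renewal will go through before the real thing, certbot has a --dry-run flag that tests the process against the staging servers without saving anything; run it by hand rather than from cron:


sudo ./certbot-auto renew --dry-run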

Processing Remove Requests

I stumbled upon an easier way to filter requests for removal from our email list and add them to the remove table in our database.

It turns out that in Apple Mail, if you select a bunch of emails and then click on the Forward button, it selects the content of each email and pastes it into a new email. For our remove folder, the process took a while, but the end result is something like this:
[Screenshot: Remove_Email]

There is a whole bunch of stuff I don’t need, but Mail nicely highlights the From name and address.

I selected the text and pasted it into BBEdit. Then I used the Process Lines Containing feature to extract all lines starting with From: (case sensitive). There were a couple with Well Golly in them, but otherwise it was a pretty clean result, with one address per line, name first and email addresses in angle brackets, e.g.


From: DarrleneBarrett@aol.com
From: John Sollino <johnc1459@hotmail.com>
From: "Jane Alexander" <jalexander156@hotmail.com>
From: "Strand, Bill, Ph.D." <Strand.Bill@commoncore.edu>

From here it is easy to massage this into a form that can be imported into the database.
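
A short shell pipeline can do most of that massaging; this is just a sketch, assuming the From: lines have been saved to a file called removes.txt:


# keep only the address: strip the From: prefix, then pull out whatever
# is inside <...> when angle brackets are present
grep '^From: ' removes.txt \
  | sed -E 's/^From: //; s/.*<([^>]+)>.*/\1/' \
  | tr 'A-Z' 'a-z' \
  | sort -u > remove_addresses.txt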

Re-negotiation error in Apache logs

After refactoring a site and implementing https for all of its pages, I started looking closely at the logs. I was getting lots of error messages with things like ‘routines:SSL3_ACCEPT:unsafe legacy renegotiation’ and ‘Re-negotiation failed’ in them, so I started looking into it. I was also vaguely aware of the BEAST and RC4 weaknesses, so I wanted to secure the Apache server as much as possible as well.

The first thing I found was a reference to the Mozilla Server Side TLS Config Generator. It gives a very long list of ciphers that are appropriate for your web server and client needs.

It also suggests using mod_headers to implement HSTS. According to Wikipedia, “HTTP Strict Transport Security (HSTS) is a web security policy mechanism which helps to protect websites against protocol downgrade attacks and cookie hijacking.”

I didn’t see headers in my mods-enabled list and, looking at the output of phpinfo();, it did not appear to be loaded. To enable mod_headers on Ubuntu you just need to run a couple of simple commands.


sudo a2enmod headers
sudo service apache2 restart

Now my mods look like this:
[Screenshot: Apache mods]
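
With mod_headers enabled, the HSTS policy itself is a single Header line in the https virtual host. The max-age value here (about six months) is the one the Mozilla generator suggested at the time, so adjust to taste:


# tell browsers to insist on https for this host for the next six months
Header always set Strict-Transport-Security "max-age=15768000"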

My old SSLCipherSuite was very short:


SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:!SSLv2:+EXP:+eNULL

The new one is a monster:


SSLCipherSuite          ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

I have no idea what most of these are, but I’m sure the good folks at Mozilla do.

The last thing they recommend is that you implement OCSP Stapling. The details are complicated, but it basically lets the server fetch the certificate’s revocation status itself and hand it to the client during the handshake, which speeds up verification of the certificate.
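
In Apache this boils down to a few mod_ssl directives. This sketch follows what the generator suggested at the time; the cache path and sizes are examples, and SSLStaplingCache has to live in the global ssl configuration rather than inside a <VirtualHost> block:


SSLUseStapling          on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
# this directive goes outside the <VirtualHost> section
SSLStaplingCache        shmcb:/var/run/ocsp(128000)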

After adding the new lines in the appropriate places in my sites-available file for the site, I restarted Apache and everything is running fine. In the fifteen minutes it took to write this up, I have had no re-negotiation messages in the error log.

Once you have implemented the changes, test your site at SSL Labs. I got an A+.

Odd Thing About Symlinks

I refactored a site recently and rather than having a bunch of files with identical headers and footers, I created an index.php file with conditionals to create the meta data and then include the content. To preserve the existing file structure, I just created symlinks to the index.php file. So I did something like this:


ln -s index.php oldname1.php
ln -s index.php oldname2.php

This works great and is pretty easy to implement. (Yes there are other ways to do it.)

I ran into a problem when I tried to create symlinks in some subdirectories to prevent the “You don’t have permission to access /include.php/ on this server.” error message. It turns out that the easiest thing is to be in the subdirectory where you want the link to reside, because the target of a relative symlink is resolved from the directory that contains the link, not from the directory where you run ln. This works:


cd Guides
ln -s ../manuals/index.php index.php

This does not:

cd manuals
ln -s index.php Guides/index.php
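
If you would rather not cd into every subdirectory, you can also build the link from the parent, as long as the target is written relative to the directory the link will live in. A sketch, assuming Guides and manuals are sibling directories under the document root:


# run from the directory that contains both Guides and manuals (example path)
cd /var/www/mysite
# the target is resolved from inside Guides, so it points at manuals/index.php
ln -s ../manuals/index.php Guides/index.php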