Installing a wildcard certificate on Linux using a DNS API

Let’s Encrypt is a certificate authority (CA) that offers free SSL/TLS certificates

Objective: To acquire and install a wildcard SSL/TLS certificate on a GNU/Linux system with automatic renewal enabled, using a registrar’s DNS API to prove ownership of the domain. In this case I’m using the Gandi LiveDNS API, but the instructions work with other DNS providers that have DNS plugins available too.


sudo su
git clone
cd ./
./ --install

Get API key from Gandi

Go to and click on “Security”, generate an API key, store it in a safe place and export it with

export GANDI_LIVEDNS_KEY="fdmlfsdklmfdkmqsdfkthiskeyisofcoursefake"

Generate the cert

Follow the official DNS API instructions at GitHub.

Now use the staging environment (--test) for issuing the certificate. This avoids burning through the rate limits of the production platform.

--issue --test --dns dns_gandi_livedns --log -d *.domain.tld -d domain.tld

Notice that this will fail on the first run but succeed on the second one.

Once the --test run finishes successfully you can switch to the production environment by deleting the /root/*.domain.tld directory (it contains the staging server’s information and will be regenerated with the production server’s info on the next run)

rm -rf /root/*.domain.tld

Now run the issuing command twice (it will fail on the first run), just changing --test to --force:

--issue --force --dns dns_gandi_livedns --log -d *.domain.tld -d domain.tld

Install the certificate in some sensible place, as the directory structure under /root/ may change in the future.

Certificate deployment instructions for Apache are at GitHub.

--install-cert -d *.domain.tld -d domain.tld \
--cert-file /etc/apache2/*.domain.tld/*.domain.tld.cer \
--key-file /etc/apache2/*.domain.tld/*.domain.tld.key \
--fullchain-file /etc/apache2/*.domain.tld/fullchain.cer \
--reloadcmd "service apache2 force-reload"

Edit Apache configuration to take the SSL/TLS protected site into use

Create a VirtualHost-directive for the SSL/TLS protected site

<VirtualHost *:443>
   SSLEngine on
   SSLCertificateKeyFile /etc/apache2/*.domain.tld/*.domain.tld.key
   SSLCertificateFile /etc/apache2/*.domain.tld/*.domain.tld.cer
   SSLCertificateChainFile /etc/apache2/*.domain.tld/fullchain.cer
</VirtualHost>

Once you are sure that the HTTPS site works, redirect requests from the HTTP site to the HTTPS site with URL rewriting.

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
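For context, these rewrite rules go inside the plain-HTTP VirtualHost. A minimal sketch, assuming mod_rewrite is enabled (‘a2enmod rewrite’) and with domain.tld as a placeholder name:

```
<VirtualHost *:80>
   ServerName domain.tld
   ServerAlias *.domain.tld

   RewriteEngine On
   RewriteCond %{HTTPS} off
   RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]
</VirtualHost>
```

Once you have verified the redirect works, you can change [R,L] to [R=301,L] to make it a permanent redirect that browsers will cache.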

Enable forward secrecy in your Apache configuration

Enabling forward secrecy makes users of the site more secure. Instructions by SSL Labs research are available at GitHub.

That’s it. The installation added a cron job that runs daily, and it will renew the certificate automatically when it nears the end of its validity period.


UPDATE 2018-07-16: If you need to use more than one API key, do as follows. This usually occurs when you are hosting sites for many different registrants.

Export the API key if this is the first time you are using that key. If you have already created certificates with this API key, it will be read from the config file under /root/.

--issue --config-home /root/ --dns dns_gandi_livedns --log -d *.domain.tld -d domain.tld

First run will fail. Run it again.

Create the target directory for certificate installation.

mkdir /etc/apache2/\*.domain.tld

Now install the certificate

./ --install-cert --config-home /root/ -d *.domain.tld -d domain.tld \
--cert-file /etc/apache2/\*.domain.tld/\*.domain.tld.cer \
--key-file /etc/apache2/\*.domain.tld/\*.domain.tld.key \
--fullchain-file /etc/apache2/\*.domain.tld/fullchain.cer \
--reloadcmd "service apache2 force-reload"


Now you are ready to proceed to configure your website’s Apache configuration as described in the original instructions (scroll up).

If you have any improvement suggestions or would just like to say thanks you can use the contact form below.

Translation: Tao Te King – Chapter 19. Returning to nature

19. Returning to nature

By leaving their self-righteousness and abandoning their own wisdom people would be greatly improved.

Tao Te Ching, Wang Bi edition, Japan 1770. Photo by Miuki, Public Domain, tagged with {{PD-old}} (copyright holder’s lifetime + 70 years)

By declining charity and “duty towards kin” they could return to their natural relations.

If skillfulness is abandoned and profit is given up there will be no thieves amongst the people.

Cultivating these three things* has come to nothing and that is why they ought to go back to where they came from.

And you then, dwell in your natural simpleness, hold on to the truth, oppose selfishness and free yourselves of ambition.

*Mr. Pekka Ervast clarifies: ‘self-righteousness, “charity” and superiority’

Own translation from 1925 Finnish translation by Pekka Ervast (ISBN 951-8995-01-X) with kind permission of Ruusu-Ristin Kirjallisuusseura ry.

TLS encryption with certificates

Let's Encrypt logo
Let’s Encrypt is a free certification authority kindly provided by Internet Security Research Group (ISRG)

Objectives to be accomplished

  1. First I will be getting and installing a new cert for use on , which will host an Etherpad instance to fulfill my secure textual collaboration needs safely.
  2. Second I will be replacing the shortly expiring commercial certificate for *. As far as I know, I can keep the old cert in place and insert the new certs under a subjectAltName. This way the free social media that I host can continue operating normally (hopefully) without any downtime.

How I did it

The definitive instructions, which I found only some time after starting this, were very helpful, as they almost always are. The recommendation is to use

CertBot logo
CertBot is a free cert management solution provided by The Electronic Frontier Foundation (EFF)

CertBot from the Electronic Frontier Foundation to automate the installation of Let’s Encrypt certificates, so I’m doing that.

CertBot takes as arguments your web server and operating system and provides instructions customized for those. is being served by Apache 2.4 on Debian 8.5, so I chose those.

CertBot points to instructions for enabling backports on my system, which I promptly followed successfully.

Then you naturally need to

sudo apt-get update

before the backports start to work.

After that

sudo apt-get install python-certbot-apache -t jessie-backports

Runs fine and installs a bunch of python candy

Next I ran

sudo certbot --apache

as instructed by the CertBot interactive website. It complained that it did not find any ‘ServerName’s in the configuration files, which is slightly strange. When answering ‘no’ to the “Do you want to proceed?” question it exited and hinted to specify the domain name with the ‘--domains’ switch

sudo certbot --apache --domains

A blue screen comes up that asks for the “emergency” email address. Put in one that you will never lose, like an address which I’ve used for over 20 years now and which is valid for a lifetime.

Next the blue screen asks if you want to have all traffic redirected to TLS encrypted. I chose to allow normal http too.

The program exits and gives good advice to check the installed cert with the awesome free test tool by SSL Labs, so I proceeded to do so. CertBot apparently knows its stuff, since the site got an ‘A’ rating for its SSL setup.

SSLTest rating A
Top of the A class / TLS protection rated A by Qualys SSL Labs’ free awesome SSLTEST service

Automating renewal of certificates

Let’s Encrypt certificates are valid for only 90 days, probably due to meticulous planning and execution to maximize security, so we want to automate the renewal.

Now CertBot site instructs to test automatic renewal arrangement by issuing command

sudo certbot renew --dry-run

and it reports that everything seems to be in order to automate the renewal so I proceeded to do so with

crontab -e

and inserted an entry to quietly run the renewal script twice a day, 12 hours apart. The command to be run is given as

certbot renew --quiet

But that will fail unless run with sudo, because it cannot access certain files, so you need to set up the cronjob as the superuser. Type

sudo su

give password and then run

crontab -e

(See here for practical examples of crontab entry syntax). Exit super user account with ctrl-d and you are done automating the renewal of the certs.
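As a sketch, the resulting root crontab entries for a twice-daily renewal could look like this (the exact times are illustrative; pick your own minutes to spread load on the CA):

```
# m  h   dom mon dow   command
0    0   *   *   *     certbot renew --quiet
0    12  *   *   *     certbot renew --quiet
```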

The encrypted URL now leads to the default Apache2 on Debian landing page “It works.. blahblahblah…” so I need to make a new VirtualHost directive for the encrypted site in /etc/apache2/sites-enabled/001-hosts which is where I keep the directives.

So I need to figure out where CertBot put the certificate and the key.

CertBot puts the very secret key and the very public certificate in ‘/etc/letsencrypt/live/domain.tld’, and the automagic from the blue screen creates a VirtualHost entry in ‘/etc/apache2/sites-enabled/000-default-le-ssl.conf’. After I made a normal VirtualHost entry in ‘/etc/apache2/sites-enabled/001-sites.conf’ and commented everything out in 000-default-le-ssl.conf, this blog is now available also in TLS-protected form.
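A quick sanity check that a key and a certificate actually belong together is to compare their public-key digests with openssl. This is a generic admin trick, not something CertBot requires; the real targets would be privkey.pem and cert.pem under /etc/letsencrypt/live/domain.tld/, but to keep the example self-contained it generates a throwaway key/cert pair in a temporary directory instead.

```shell
# Generate a throwaway self-signed key + certificate (stand-ins for
# /etc/letsencrypt/live/domain.tld/privkey.pem and cert.pem).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmp/privkey.pem" -out "$tmp/cert.pem" \
  -days 1 -subj "/CN=domain.tld" 2>/dev/null

# Extract the public key from each and hash it; the digests must match
# if and only if the key and the certificate belong together.
key_fp=$(openssl pkey -in "$tmp/privkey.pem" -pubout 2>/dev/null | openssl sha256)
crt_fp=$(openssl x509 -in "$tmp/cert.pem" -pubkey -noout | openssl sha256)

if [ "$key_fp" = "$crt_fp" ]; then
  match=yes
else
  match=no
fi
echo "key and certificate match: $match"
rm -rf "$tmp"
```

Run against the real Let’s Encrypt files, a mismatch would mean Apache is pointed at a stale key or cert.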

Friendly folks at #freenode pointed out that

sudo apachectl -S

is very useful for locating problem points regarding conflicting VirtualHost directives

Next I am going to figure out whether commenting things out of 000-default-le-ssl.conf has any adverse effects. It seems the file with the lower number prefix takes precedence.

Next I try to replicate the necessary steps described in this blog post to actually enable

All that was needed to bring up the default Debian/Apache “it works page” over TLS encrypted https was one run of

sudo certbot --apache --domains

and then fixing the VirtualHost directive to your liking to actually serve your content.

Getting new certs with nginx.

There doesn’t seem to be quite the same level of automation for Nginx-hosted sites as for the Apache ones.

sudo certbot certonly --webroot -d --webroot-path /var/www/diaspora

is what I used to successfully get the new certificates in place.
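The --webroot method works by writing a challenge file under <webroot>/.well-known/acme-challenge/ and letting Let’s Encrypt fetch it over plain HTTP, so Nginx must serve that path from the same directory given with --webroot-path. A minimal sketch, with server_name and the path as placeholders matching the command above:

```
server {
    listen 80;
    server_name domain.tld;

    # Serve the ACME HTTP-01 challenge files written by certbot --webroot
    location /.well-known/acme-challenge/ {
        root /var/www/diaspora;
    }
}
```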

Installation of GNU MediaGoblin on Debian GNU/Linux

Installing GNU MediaGoblin 0.9.0 with Py3 support, using PostgreSQL as the RDBMS, on a Debian GNU/Linux system

Installing GNU MediaGoblin (over and over again)

Here I document how to install GNU MediaGoblin 0.9.0, The Three Gobliners, on Debian GNU/Linux.

Why over and over again?

Short answer: the installation instructions really do need to be read closely to be completely understood. So I’ll be installing again, a third time.

  • GNU MediaGoblin 0.8.0 I accidentally set to use SQLite instead of PostgreSQL, the intended database backend. No migration script exists, so a reinstall was needed
  • GNU MediaGoblin 0.9.0 I managed to install using Py2 instead of Py3.
  • GNU MediaGoblin 0.9.0 with Python 3 is what I am aiming at the third time around installing
    UPDATE: Seems installation of GNU MediaGoblin 0.9.0 with python3 support is currently impossible if the idea was to use flup and fcgi. Follow this ticket for updates on the situation.

Since the installation using Python 3 is impossible at the moment, I have installed the Py2 version instead at , using Nginx and fcgi for serving content.

Previously installed on the server is Nginx as the web server, with TLS security enabled. Services already running on the server are (Hubzilla), (diaspora*) and (GNU social), so some of the dependencies are likely already there.

I ran into some problems which meant that the PostgreSQL cluster was not created and started. I got good help from StuckMojo in #postgresql on Freenode IRC.

I fixed the situation by running

  • ‘nano /etc/locale.gen’ and uncommented the Finnish and US English locales
  • ‘sudo locale-gen’ generates the locales according to /etc/locale.gen
  • ‘sudo pg_createcluster --locale=en_US.UTF-8 9.4 main’ creates the cluster and
  • ‘sudo pg_ctlcluster 9.4 main start’ starts the cluster
  • check its status with ‘sudo pg_lsclusters’

Now everything should be ready for creating the database user and the database.
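A sketch of that step, assuming the ‘mediagoblin’ user and database names suggested by the official MediaGoblin deployment docs (run on the server with sudo rights; requires the cluster started above):

```
# Create an unprivileged role for MediaGoblin (no superuser/createdb/createrole)
sudo -u postgres createuser --no-createdb --no-createrole --no-superuser mediagoblin

# Create the database owned by that role, UTF-8 encoded
sudo -u postgres createdb --encoding=UTF8 --owner=mediagoblin mediagoblin
```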


Migration of various free social media from GNU/Linux server to server

Migration procedure for moving various free social media from a GNU/Linux to another GNU/Linux system and end results

Consumium free social medias and Consumerium consumer empowerment effort
Current logo for Consumium free social media services and Consumerium – Enhancing Consumer Informedness – effort

This is the record of what went well and what didn’t in the process of migrating the * sites (except , that’s in Espoo)

This migration was completed on 2016-06-09. I would like to extend a warm thank you to for showing compassion in my predicament and offering to credit me some of the costs incurred by needing two servers for a period of time.

<spam>Their operation is really top-notch and I have never had an outage with them that I would not have been responsible for myself. Ever since I started hosting free social media with them in July 2013 the service has been outstanding, and their control panel includes the ability to take snapshots of system disks and a VNC, just in case someone is not comfortable working with the cli. The first time I saw the VNC in the control panel and it started to show the Debian GNU/Linux white-on-black bootup in my browser I was impressed. Then it moved to runlevel 6 and I was naturally like “Whoa! It can do that!”. is maybe not the most inexpensive hosting out there at the moment, but I tell you their service level and its consistency are worth all the extra money. SSD system disks are spacious and very fast, and just as soon as  starts I’ll be purchasing at least one 10€ unit of 2,000GB big storage (which can be grown to 400,000GB, slightly under 400TB). Scp’ing between 2 servers in the same data center in the Netherlands I was able to clock 101 MB/s. That is almost a gigabit per second; a normal HDD couldn’t handle that.</spam>

A Debian 8 -> Debian 8 migration of 4 free social media instances: Debian GNU/Linux, Nginx as the web server, MariaDB as the RDBMS, and Ruby, PHP and Python as the languages the services run on

Migration of diaspora* to a new server

  • (how to install diaspora* freesome on Debian GNU/Linux)
    diaspora* is the biggest and best known of the free social media. It has innovative features, though it is somewhat limited due to the creators thinking really hard about protecting the consumer from possible privacy-related threats. The software is high quality and reliable. It uses an asymmetric sharing arrangement that is diametrically opposed to Twitter’s

    diaspora* was the original raison d’être for the old server, called Debian7. The name is not very well chosen and is misleading, since the machine was dist-upgraded to Debian 8 stable without hiccups. diaspora* was originally installed in July 2013, which at the time took a couple of days

  • Grabbed the database, app/views/home and public/uploads, inserted those into place, and the pod looks fine now after the migration.
  • Email was more of a hassle and is covered in a separate paragraph you’ll find down this page.

Migration of GNU social to a new server

  • (how to install GNU social freesome) (How I originally installed GNU social)
    GNU social is a no-nonsense microblogging platform that is simple to grasp. Unfortunately it does not work very well at the moment.

    – GNU social is a handy microblogging service. This instance was installed in 2016 and should pose no problems. MySQL was replaced with MariaDB during installation with no problems. Update: GNU social migration was the first one to be done. Grabbed the database (which contains the confs) and the ‘avatar’ and ‘files’ directories. Shut down. Put those in place, restarted the web server, and GNU social was up with apparently all the old information from the previous box.

  • If you are getting an Error 400: after the migration GNU social has been doing the same thing as before. It often gives an error “400” when trying to microblog. Here one just needs to know to hit ctrl-r; there is no need to even do ctrl-a, ctrl-c, ctrl-r, ctrl-v, as the software preserves what was written in the textbox.

Interesting point about Hubzilla and Friendica

Friendica and Hubzilla leverage the same instructional capital and best practices, which leads to their installation instructions having many portions in common.

Migration of Hubzilla to a new server

  • – (how to install Hubzilla freesome)
    Hubzilla is very high quality software and it has always worked just like the label said. Its use of channels is intuitive as a way of interacting with other people.

    This will probably not have the old database restored, because when I originally installed this I didn’t realize that the point is to have many, many channels but just one login. Of course it might be possible to restore the database but manipulate it so that the Consum(er)ium-relevant channels would be under the same user

  • Well I did restore the old database.
  • Pretty much everything that was needed for the installation of Hubzilla was already there. I just needed to run ‘sudo aptitude install mcrypt php5-mcrypt’ and install Hubzilla, stop Nginx, and drop in the database and the user uploads located in /var/www/hubzilla/store, and it seems to work fine.

Migration of Friendica to a new server (how to install Friendica free social media)

Friendica is the free social media solution with the least steep learning curve for the people escaping Facebook more and more often.



Friendica’s migration did not require copying over more than just the database, as Friendica saves the uploaded files in the database and not on the flat file system.

Dealing with outgoing and incoming email

Getting email arrangements to work in a safe and reasonable way is by no means as easy as one might think at the start. diaspora* email was configured to use SMTP over a TLS-encrypted hop over to ’s SMTP server. It took a while to figure out, but I am guessing this will make the email look better to spam filters, as the “origin” is under the same domain as the machines given in the MX records in DNS as the Mail eXchange servers for

‘sudo aptitude install sendmail’ installs sendmail, an MTA; this is apparently all that is needed for PHP’s mail() function to work.

The migration plan (and how it went)

(Note: to lazily get all the dependencies, and hope there wasn’t old junk, you could follow this post.)

Migration of system settings

  • Update services to latest version so you get the same exact version when you reinstall each service from latest release [✔]
  • Grab TLS key and cert – Remember to keep the key safe [✔] (note: exposing the server.key usually kept in /etc/ssl/private is very dangerous as it will expose all communications encrypted with that key)
  • Grab firewall settings allowing traffic to ports 22, 80 and 443 [✔] The NMAP security scanner is a great copyleft free tool for looking at this. Tip: ‘nmap localhost’ from inside the firewall and ‘nmap <the IP address>’ from outside the firewall will be very useful scans for verifying firewall settings.
  • Grab confs:
  • /etc/nginx/nginx.conf [✔]
  • /etc/nginx/sites-enabled/nginx.conf [✔]
  • Grab home dir [✔]
  • Grab logs [✔]
  • /var/log/nginx/access.log [✔]
  • /var/log/nginx/error.log [✔]
  • Then decided to grab all of /var/log into a .tar.gz. It is only logs, cannot hurt. [✔]
  • Mass grab /etc and /var/www for later reference when the old server is recycled and resources returned to cloud.
  • Get new server. [✔] Remember to install an SSH server when installing the software or you’ll be unable to access the machine via SSH. Only if the hosting provider offers a Virtual Network Console can you fix this problem there
  • Add self to sudoers [✔]
  • Restore home dir contents [✔]
  • Install Nginx [✔]
  • Put logs, key, cert and nginx.conf in place [✔]

Repeat following steps for each service

  • Install dependencies [✔]
  • Install new service clean [✔]
  • Notify users that there will be a few hours of data loss if they post. Better idea: when everything is ready with the new installation in place, and you are thus ready to start the DNS change propagation, tell people that the database will be frozen while the old machine is “unreachable” because the DNS already points to the new machine.
  • Grab databases. Each database separately. [✔]
  • Grab user uploaded content and the custom landing page for d* [✔]
  • Insert grabbed database, confs, landing page, user uploaded content. [✔]

Free services that help the SSL/TLS encryption administrator

by @mozilla helps you make your #SSL/#TLS stronger.

Mozilla logo used under licensing clauses. Click pic for credits

Check your TLS strength free with @QualSys