Both tests commented that my vocabulary is at the level of “professional white-collar”. I did not cheat in either test by looking up words in Wikipedia or Wiktionary. I’m considering whether to try the French test, but I’m kind of put off by knowing in advance that I would get a dismal result.
Notice that this will fail on the first run but succeed on the second one.
Once the --test run finishes successfully you can switch to the production environment by deleting the /root/.acme.sh/*.domain.tld directory (it contains the staging server’s information and will be regenerated with the production server’s info on the next run):
rm -rf /root/.acme.sh/*.domain.tld
Now run the issuing command twice (it will fail on the first run), just changing --test to --force.
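For reference, the full command pair might look roughly like this. This is only a sketch: the dns_gandi_livedns hook and domain.tld are placeholders I am assuming, so substitute your own acme.sh DNS API hook and domain.

```shell
# Sketch only: dns_gandi_livedns and domain.tld are placeholders;
# use the DNS API hook for your own registrar.

# Dry run against the staging server (fails on first run, succeeds on second):
acme.sh --issue --test --dns dns_gandi_livedns -d domain.tld -d '*.domain.tld'

# Clear the staging data, then repeat against production:
rm -rf /root/.acme.sh/*.domain.tld
acme.sh --issue --force --dns dns_gandi_livedns -d domain.tld -d '*.domain.tld'
```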
That’s it. The acme.sh installation added a cronjob to run it daily, and it will renew the certificate automatically when it is nearing the end of its validity period.
UPDATE 2018-07-16: If you need to use more than one API Key do as follows. This usually occurs when you are hosting sites for many different registrants.
Export the API key if this is the first time you are using it. If you have already created certificates with this API key, acme.sh will read it from /root/.acme.sh/yourconfigdirectory/account.conf.
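Each of acme.sh’s dnsapi hooks reads its key from its own environment variable. As an assumed example, the Gandi LiveDNS hook expects GANDI_LIVEDNS_KEY; check the dnsapi documentation for your own registrar’s variable name.

```shell
# GANDI_LIVEDNS_KEY is the variable the dns_gandi_livedns hook expects;
# the value here is a placeholder. On first successful use, acme.sh saves
# the key into account.conf so you only need to export it once per account.
export GANDI_LIVEDNS_KEY="replace-with-your-api-key"
```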
First I will be getting and installing a new cert for use on pad.byjuho.fi which will host an Etherpad instance to fulfill my secure textual collaboration needs safely.
Second, I will be replacing the soon-to-expire commercial certificate for *.consumium.org. As far as I know, I can leave the old cert in place and insert the new names under a subjectAltName. This way the free social media that I host can (hopefully) continue operating normally without any downtime.
as instructed by the Certbot interactive website. Certbot complained that it did not find any ‘ServerName’ entries in the configuration files, which is slightly strange. When I answered ‘no’ to the “Do you want to proceed?” question, it exited and hinted that the domain name can be specified with the ‘--domains’ switch:
sudo certbot --apache --domains byjuho.fi
A blue screen comes up that asks for the “emergency” email address. Put in one that you will never lose, like an https://iki.fi address, which I’ve used for over 20 years now and which is valid for a lifetime.
Next the blue screen asks if you want all traffic redirected to the TLS-encrypted site. I chose to allow plain HTTP too.
The encrypted URL now leads to the default Apache2-on-Debian landing page (“It works… blah blah blah…”), so I need to make a new VirtualHost directive for the encrypted site in /etc/apache2/sites-enabled/001-hosts, which is where I keep the directives.
So I need to figure out where Certbot put the certificate and the key.
Certbot puts the very secret key and the very public certificate in ‘/etc/letsencrypt/live/domain.tld’, and the automagic from the blue screen creates a VirtualHost entry in ‘/etc/apache2/sites-enabled/000-default-le-ssl.conf’. After I made a normal VirtualHost entry in ‘/etc/apache2/sites-enabled/001-sites.conf’ and commented everything out in 000-default-le-ssl.conf, this blog is now available also over TLS at https://ByJuho.fi.
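My entry in 001-sites.conf was along these lines. This is only a sketch: the domain name and DocumentRoot are placeholders, though the fullchain.pem/privkey.pem filenames are what Certbot places under /etc/letsencrypt/live.

```apache
# Sketch of a TLS VirtualHost; ServerName and DocumentRoot are placeholders.
<VirtualHost *:443>
    ServerName byjuho.fi
    DocumentRoot /var/www/byjuho.fi
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/byjuho.fi/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/byjuho.fi/privkey.pem
</VirtualHost>
```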
Friendly folks at #freenode pointed out that
sudo apachectl -S
is very useful for locating problem points regarding conflicting VirtualHost directives
Next I am going to figure out whether commenting stuff out of 000-default-le-ssl.conf has any adverse effects. It seems the file with the lower number prefix takes precedence.
Next I try to replicate the necessary steps described in this blog post to actually enable https://pad.byjuho.fi
All that was needed to bring up the default Debian/Apache “It works” page over TLS-encrypted HTTPS was one run of
sudo certbot --apache --domains pad.byjuho.fi
followed by fixing the VirtualHost directive to your liking to actually serve your content.
Getting new certs with nginx.
There doesn’t seem to be quite the same level of automation for Nginx-hosted sites as for the Apache ones.
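With Nginx the usual route is to obtain the certificate with ‘certbot certonly’ and then wire it into the server block by hand. A sketch, assuming the webroot method and with media.consumium.org standing in for your domain:

```nginx
# Sketch only; obtain the cert first with something like:
#   sudo certbot certonly --webroot -w /var/www/mediagoblin -d media.consumium.org
# The paths below follow Certbot's /etc/letsencrypt/live layout.
server {
    listen 443 ssl;
    server_name media.consumium.org;
    ssl_certificate     /etc/letsencrypt/live/media.consumium.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/media.consumium.org/privkey.pem;
    root /var/www/mediagoblin;
}
```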
Short answer: the installation instructions need to be read in full to be completely understood. So I’ll be installing again, for a third time.
GNU MediaGoblin 0.8.0 I accidentally set up to use SQLite instead of PostgreSQL, the intended database backend. No migration script exists, so a reinstall was needed.
GNU MediaGoblin 0.9.0 I managed to install using Python 2 instead of Python 3.
GNU MediaGoblin 0.9.0 with Python 3 is what I am aiming at for the third time around. UPDATE: It seems installation of GNU MediaGoblin 0.9.0 with Python 3 support is currently impossible if the idea was to use flup and FCGI. Follow this ticket for updates on the situation.
Since installation using Python 3 is impossible at the moment, I have installed the Python 2 version instead at https://media.consumium.org, using Nginx and FCGI for serving content.
Migration procedure for moving various free social media from one GNU/Linux system to another, and the end results
This is the record of what went well and what didn’t in the process of migrating the *.consumium.org sites (except https://c.consumium.org, which is hosted in Espoo).
This migration was completed on 2016-06-09. I would like to extend a warm thank you to https://TransIP.eu for showing compassion in my predicament and offering to credit me some of the costs incurred by requiring two servers for a period of time.
<spam>Their operation is really top-notch and I have never had an outage with them for which I was not responsible myself. Ever since I started hosting free social media with them in July 2013 the service has been outstanding, and their control panel includes the ability to take snapshots of system disks and a VNC, just in case someone is not comfortable working with the CLI. The first time I saw the VNC in the control panel and it started to show the Debian GNU/Linux white-on-black bootup in my browser I was impressed… Then it moved to runlevel 6 and I was naturally like “Whoa! It can do that!”. TransIP.eu is maybe not the most inexpensive hosting outfit out there at the moment, but I tell you their service level and its consistency are worth all the extra money. The SSD system disks are spacious and very fast, and as soon as http://maidsafe.net/ starts I’ll be purchasing at least one 10€ unit of 2,000GB big storage (which can be grown to 400,000GB, slightly under 400TB). Scp’ing between two servers in the same data center in the Netherlands I was able to clock 101 MB/s; that is almost a gigabit per second, which a normal HDD couldn’t handle.</spam>
Diaspora* was the original raison d’être for the old server, called Debian7. The name is not very well chosen and is misleading, since the machine was dist-upgraded to Debian 8 stable without hiccups. Diaspora* was originally installed in July 2013, which at the time took a couple of days.
Grabbed the database, app/views/home and public/uploads and inserted those into place and the pod looks fine now after the migration.
Email was more of a hassle and is covered in a separate paragraph you’ll find further down this page.
– GNU social is a handy microblogging service. This instance was installed in 2016 and should pose no problems. MySQL was replaced with MariaDB during its installation, also with no problems. Update: GNU social was the first migration to be done. I grabbed the database (which contains the confs) and the ‘avatar’ and ‘files’ directories, shut down, put those in place, restarted the web server, and GNU social was up with apparently all the old information from the previous box.
If you are getting an Error 400: after the migration, GNU social has been doing the same thing as before: when trying to microblog it often gives an error 400. Here one just needs to know to hit Ctrl-R; there is no need to even do Ctrl-A, Ctrl-C, Ctrl-R, Ctrl-V, as the software preserves what was written in the textbox.
Interesting point about Hubzilla and Friendica
Friendica and Hubzilla leverage the same instructional capital and best practices, which means their installation instructions have many portions in common.
This one will probably not have the old database restored, because when I originally installed it I didn’t realize that the point is to have many, many channels but just one login. Of course it might be possible to restore the database and manipulate it so that the Consum(er)ium-relevant channels end up under the same user.
Well I did restore the old database.
Pretty much everything needed for the installation of Hubzilla was already there. I just needed to run ‘sudo aptitude install mcrypt php5-mcrypt’, install Hubzilla, stop Nginx, and drop in the database and the user uploads located in /var/www/hubzilla/store, and it seems to work fine.
Friendica is the freesome with the least steep learning curve for people who want to free themselves of Facebook every now and then.
Friendica’s migration did not require copying over more than just the database, as Friendica saves uploaded files in the database and not in a flat file system.
Dealing with outgoing and incoming email
Getting email arrangements to work in a safe and reasonable way is by no means as easy as one might think at the start. diaspora* email was configured to use SMTP over a TLS-encrypted hop to https://gandi.net‘s SMTP server. It took a while to figure out, but I am guessing this makes the email look better to spam filters, as the “origin” is under the same domain as the machines given in the DNS MX records as the Mail eXchange servers for consumium.org.
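In diaspora.yml the relevant section looks roughly like this. A sketch only: the relay host, port, sender address and credentials are placeholder assumptions, so check gandi.net’s SMTP documentation for the actual values.

```yaml
# Sketch of diaspora.yml mail settings; host, sender and credentials
# are placeholders.
configuration:
  mail:
    enable: true
    sender_address: "noreply@consumium.org"
    method: "smtp"
    smtp:
      host: "mail.gandi.net"
      port: 587
      authentication: "login"
      username: "noreply@consumium.org"
      password: "changeme"
```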
‘sudo aptitude install sendmail’ installs sendmail, an MTA; this is apparently all that is needed for PHP’s mail() function to work.
The migration plan (and how it went)
(Note: to lazily get all the dependencies, hoping there wasn’t old junk, you could follow this post: http://juboblo.gr/index.php/2015/12/02/original-howto-migrate-gnulinux-to-bigger-disk-with-clean-install-and-grab-all-apt-gettable-software-settings-and-files/)
Migration of system settings
Update services to latest version so you get the same exact version when you reinstall each service from latest release [✔]
Grab the TLS key and cert. Remember to keep the key safe [✔] (note: exposing the server.key, usually kept in /etc/ssl/private, is very dangerous as it will expose all communications encrypted with that key)
Grab firewall settings allowing traffic to ports 22, 80 and 443 [✔] The NMAP security scanner is a great copyleft free tool for checking this. Tip: ‘nmap localhost’ inside the firewall and ‘nmap <the IP address>’ from outside the firewall are very useful scans for verifying firewall settings.
Grab home dir [✔]
Grab logs [✔]
Then I decided to grab all of /var/log into a .tar.gz. It is only logs, it cannot hurt. [✔]
Mass-grab /etc and /var/www for later reference, for when the old server is recycled and its resources are returned to the cloud.
Get a new server. [✔] Remember to install an SSH server when installing the software, or you’ll be unable to access the machine via ssh. Only if the hosting provider offers a Virtual Network Console can you fix this problem from there.
Add self to sudoers [✔]
Restore home dir contents [✔]
Install Nginx [✔]
Put logs, key, cert and nginx.conf in place [✔]
Repeat following steps for each service
Install dependencies [✔]
Install new service clean [✔]
NOTIFY USERS THAT THERE WILL BE A FEW HOURS OF DATA LOSS IF THEY POST. Better idea: when all is ready with the new installation in place, and you are thus ready to start the DNS change propagation, tell people that the database will be frozen while the old machine is “unreachable” due to the DNS already pointing to the next machine.
Grab databases. Each database separately. [✔]
Grab user uploaded content and the custom landing page for d* [✔]
Insert grabbed database, confs, landing page, user uploaded content. [✔]
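The per-service grab steps above can be sketched like this. The database name, paths and hostname are examples only; the dump and copy commands are left as comments since they need the real machines, while the archive step runs against a stand-in directory so the sketch is executable anywhere.

```shell
# Sketch of the per-service "grab" step, assuming MySQL/MariaDB and a
# flat-file upload directory (all names here are examples, adjust per service).

# 1. Dump the database on the old server, e.g.:
#    mysqldump -u root -p gnusocial > gnusocial-backup.sql

# 2. Archive the user-uploaded content. Demonstrated on a stand-in
#    directory created with mktemp so the commands can run anywhere:
UPLOADS=$(mktemp -d)                    # stands in for e.g. /var/www/gnusocial/file
echo demo > "$UPLOADS/avatar.png"       # stand-in uploaded file
tar czf /tmp/uploads-backup.tar.gz -C "$UPLOADS" .

# 3. Copy both over to the new server, e.g.:
#    scp gnusocial-backup.sql /tmp/uploads-backup.tar.gz newbox:/root/migration/

# Verify the archive contents before trusting it:
tar tzf /tmp/uploads-backup.tar.gz
```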