Upgrading Debian GNU/Linux from Jessie to Stretch

The official Debian GNU/Linux logo
Debian is a very reliable OS for servers, though you can install it on desktops too.

Objective: To safely upgrade two servers from Debian 8 (Jessie) to Debian 9 (Stretch) and to keep good records of what was done to perform the upgrade.

  1. Server #1 is used just to verify and hold backups of the other servers, so it has no public services running.
  2. Server #2 hosts diaspora*, Hubzilla, GNU social, Friendica and GNU MediaGoblin instances.

The definitive source on how to achieve the objectives

There are various tutorials with very brief instructions on how to go about the upgrade, but I decided to follow The definitive guide to upgrading from Debian Jessie (8) to Debian Stretch (9) at Debian.org so as to do the upgrade very carefully.

Preparation

First off: I informed the users of the free social media instances about the upcoming upgrade and the downtime to be expected.

Make sure all installed software is at its latest version

# apt update && apt upgrade

I made backups of /etc, /var/lib/dpkg, /var/lib/apt/extended_states, /home/username, /var/www and the output of dpkg --get-selections "*" and stored them off-site. Additionally I took a snapshot of the system disk, so that if the upgrade doesn't go well it is possible to revert to the pre-upgrade situation.
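
For reference, a minimal sketch of such a backup run (backuphost.example and the target directory are placeholders; adjust the paths to your own setup):

# dpkg --get-selections "*" > /root/dpkg-selections-$(date +%F).txt
# tar czf /root/pre-stretch-backup-$(date +%F).tar.gz /etc /var/lib/dpkg /var/lib/apt/extended_states /home/username /var/www /root/dpkg-selections-$(date +%F).txt
# scp /root/pre-stretch-backup-$(date +%F).tar.gz backup@backuphost.example:/srv/backups/

The first command saves the package selections, the second packs everything into one dated archive and the third copies it off-site.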

Next checked for non-Jessie software with

$ apt-forktracer | sort

It found some items from jessie-backports but nothing that was in use.

Checked for half-installed packages with

# dpkg --audit

Nothing of interest was found. Just one dummy package.

Check for packages on hold

# dpkg --get-selections | grep 'hold$'

None were found.

Edit the /etc/apt/sources.list

Now update /etc/apt/sources.list, replacing each occurrence of 'jessie' with 'stretch'. I did it with sed (Stream EDitor) but it is also possible to edit the file manually with your favourite editor.

# sed -i 's/jessie/stretch/g' /etc/apt/sources.list

Start session recording for later reference

Next, start session recording (replace step with a number; when a reboot is needed, restart the session recording with an incremented number):

# script -t 2>~/upgrade-stretchstep.time -a ~/upgrade-stretchstep.script

If you have used the -t switch for script you can use the scriptreplay program to replay the whole session:

# scriptreplay ~/upgrade-stretch1.time ~/upgrade-stretch1.script

The upgrade

Update the package list with the Stretch sources in place

# apt-get update

Make sure you have enough disk space for the upgrade

# apt-get -o APT::Get::Trivial-Only=true dist-upgrade

There was ample space, so proceed with the minimal upgrade (which only upgrades packages that can be upgraded without installing or removing anything else).

# apt-get upgrade

Now it is time to upgrade the rest of the system. This will take a while.

# apt-get dist-upgrade

Next, check whether you already have a linux-image* metapackage installed

# dpkg -l "linux-image*" | grep ^ii | grep -i meta

If you do not see any output, then you will either need to install a new linux-image package by hand or install a linux-image metapackage. To see a list of available linux-image metapackages, run:

# apt-cache search linux-image- | grep -i meta | grep -v transition

If unsure which linux-image metapackage to choose, you can get a longer description of the package in question by running

# apt-cache show linux-image-amd64

Looks good. Let’s install it.

# apt-get install linux-image-amd64

apt-get reports that there are installed packages that are no longer needed. Remove them with

# apt-get autoremove

Now it is time to reboot for the new kernel to take effect.

# reboot

Login and check the OS version

$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 9.5 (stretch)
Release:        9.5
Codename:       stretch

Verified that the services were up-and-running. Upgrade successful!


UPDATE: It seems there are two PostgreSQL clusters running: version 9.4 (the old one from Jessie) and version 9.6, which ships with Stretch. Going to look into removing the old one safely.
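
A sketch of how that removal could go with Debian's postgresql-common tools (this assumes the default cluster name "main"; verify against your backups before dropping anything):

# pg_lsclusters

shows the clusters and their status. If the 9.6 "main" cluster is still empty, the usual Debian route is to drop it, let pg_upgradecluster migrate the 9.4 data into a fresh 9.6 cluster, and only remove the old one once the services are verified to work against 9.6:

# pg_dropcluster 9.6 main --stop
# pg_upgradecluster 9.4 main
# pg_dropcluster 9.4 main --stop
# apt-get purge postgresql-9.4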

Installing a LetsEncrypt.org wildcard certificate on Linux using acme.sh and a DNS API

Let’s Encrypt is a certificate authority (CA) that offers free SSL/TLS certificates

Objective: To acquire and install a wildcard SSL/TLS certificate from LetsEncrypt.org on a GNU/Linux system, with automatic renewal enabled, by using a registrar's DNS API to prove ownership of the domain. In this case I'm using the Gandi LiveDNS API, but the instructions also work with other DNS providers that have acme.sh DNS plugins available.

Install acme.sh

sudo su
git clone https://github.com/Neilpang/acme.sh.git
cd ./acme.sh
./acme.sh --install

Get API key from Gandi

Go to https://account.gandi.net/, click on "Security", generate an API key, store it in a safe place and export it with

export GANDI_LIVEDNS_KEY="fdmlfsdklmfdkmqsdfkthiskeyisofcoursefake"

Generate the cert

Followed the official acme.sh DNS API instructions at GitHub.

Now use the staging environment (--test) for the certificate issuing. This will spare you the issuing limits of the LetsEncrypt.org production platform.

acme.sh --issue --test --log --dns dns_gandi_livedns -d *.domain.tld -d domain.tld

Notice that this will fail on the first run but succeed on the second one.

Once the --test run finishes successfully you can switch to the production environment by deleting the /root/.acme.sh/*.domain.tld directory (it contains the staging server's information and will be regenerated with the production server's info on the next run)

rm -rf /root/.acme.sh/*.domain.tld

Now run the issuing command twice (it will fail on the first run), just changing --test to --force

acme.sh --issue --force --log --dns dns_gandi_livedns -d *.domain.tld -d domain.tld

Install the certificate in some sensible place as the directory structure of /root/.acme.sh may change in the future.

Certificate deployment instructions for Apache at acme.sh GitHub

acme.sh --install-cert -d *.domain.tld -d domain.tld \
--cert-file /etc/apache2/acme.sh/*.domain.tld/*.domain.tld.cer \
--key-file /etc/apache2/acme.sh/*.domain.tld/*.domain.tld.key \
--fullchain-file /etc/apache2/acme.sh/*.domain.tld/fullchain.cer \
--reloadcmd "service apache2 force-reload"

Edit Apache configuration to take the SSL/TLS protected site into use

Create a VirtualHost-directive for the SSL/TLS protected site

<VirtualHost *:443>
...
    SSLEngine on
    SSLCertificateFile /etc/apache2/acme.sh/*.domain.tld/*.domain.tld.cer
    SSLCertificateKeyFile /etc/apache2/acme.sh/*.domain.tld/*.domain.tld.key
    SSLCACertificateFile /etc/apache2/acme.sh/*.domain.tld/fullchain.cer
</VirtualHost>


Once you are sure that the HTTPS site works, redirect requests from the HTTP site to the HTTPS site with URL rewriting.

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R,L]

Enable forward secrecy in your Apache configuration

Enabling forward secrecy makes users of the site more secure. Instructions by SSLLabs research here at GitHub.


That's it. The acme.sh installation added a cronjob to run it daily, and it will renew the certificate automatically when it is nearing the end of its validity period.


 

UPDATE 2018-07-16: If you need to use more than one API key, do as follows. This usually occurs when you are hosting sites for many different registrants.

Export the API key if this is the first time you are using that key. If you have already created certificates with this API key, acme.sh will read it from the config file /root/.acme.sh/yourconfigdirectory/account.conf

acme.sh --issue --config-home /root/.acme.sh/yourconfigdirectory --log --dns dns_gandi_livedns -d *.domain.tld -d domain.tld

First run will fail. Run it again.

Create the target directory for certificate installation.

mkdir /etc/apache2/acme.sh/yourconfigdirectory/\*.domain.tld

Now install the certificate

./acme.sh --install-cert --config-home /root/.acme.sh/yourconfigdirectory -d *.domain.tld -d domain.tld \
--cert-file /etc/apache2/acme.sh/yourconfigdirectory/\*.domain.tld/\*.domain.tld.cer \
--key-file /etc/apache2/acme.sh/yourconfigdirectory/\*.domain.tld/\*.domain.tld.key \
--fullchain-file /etc/apache2/acme.sh/yourconfigdirectory/\*.domain.tld/fullchain.cer \
--reloadcmd "service apache2 force-reload"

Now you are ready to proceed to configure your website’s Apache configuration as described in the original instructions (scroll up).


If you have any improvement suggestions or would just like to say thanks you can use the contact form below.

Migrating a Mediawiki to a new Linux server

To get you started: Get a Linux VPS.

I chose Tavu.io, an ecohosting company with the data center deep inside the Finnish granite bedrock, a “renewable electricity only”-policy and a cloud infrastructure built on top of OpenStack.

For OS I chose latest Debian Stable which was version 9 at the time of writing.

Mediawiki's logo
Mediawiki is a wiki system of awesome quality and reliability

Tavu.io created the VPS on their OpenStack-based cloud in a matter of tens of seconds.

The system then gave a temporary password, and on first login via ssh it required that a new password be set.

Login with the new password and run

sudo apt update && sudo apt upgrade

This will bring the pre-installed software to its latest versions and may take a while.

Then I added a user name I usually use on Linux systems by entering:

sudo useradd -m -s /bin/bash <username>

Now set a password for username with

sudo passwd <username>

and add the user to /etc/sudoers (make a copy of the line that says "root" and change 'root' to your user name of choice), then log out and log back in as the newly created user.
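
For reference, after the edit the relevant part of /etc/sudoers could look like this on Debian (edit it with visudo; <username> is the name you chose above):

# User privilege specification
root       ALL=(ALL:ALL) ALL
<username> ALL=(ALL:ALL) ALL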

Now is a good time to get a firewall going, so do so.
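
One minimal sketch of that, using ufw and assuming only ssh, HTTP and HTTPS should be reachable from the outside:

sudo apt install ufw
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose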

Now grab a list of installed packages with

dpkg --get-selections > packages-YYYY-MM-DD.list

This may be useful later.

Now install some software

sudo apt install tmux nmap apache2 lynx
  1. tmux is a shell session multiplexer
  2. nmap is a port scanner
  3. apache2 is a web server
  4. lynx is a terminal-based web browser

And some more software

sudo apt install htop atop itop iotop glances chkrootkit
  1. htop, atop, itop and glances are system monitoring tools (for humans)
  2. iotop is an I/O (Input/Output) monitoring tool for humans (requires sudo)
  3. chkrootkit is software for checking whether your system has a known rootkit installed (bad for you)

Migrating the Mediawiki

First install the dependencies as described below, and only then switch to following the Moving a wiki guide, which actually consists of three operations:

  1. Making a backup of the Mediawiki on the old server
  2. Moving the backup to the new machine
  3. Restoring Mediawiki from the backup.

Installing Mediawiki’s dependencies

Now we move on to installing the dependencies of Mediawiki. For this we will follow the Mediawiki installation guide for Debian and Ubuntu (generic guide here) up to the point of actually installing one.

sudo apt-get install apache2 default-mysql-server default-mysql-client php php-mysql libapache2-mod-php php-xml php-mbstring

We installed MariaDB instead of MySQL. They are binary compatible, so you can choose one or the other and also interchange them afterwards. Add the database and database user of your wiki and grant all rights on the database to the database user, as sketched below.
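
A minimal sketch of that step (wikidb, wikiuser and the password are placeholders; on Debian 9 the MariaDB root account typically authenticates via the unix socket, so sudo mysql works without a password):

sudo mysql <<'SQL'
CREATE DATABASE wikidb;
CREATE USER 'wikiuser'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON wikidb.* TO 'wikiuser'@'localhost';
FLUSH PRIVILEGES;
SQL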

Those are the mandatory components; next up are the beneficial components, out of which we chose the following

sudo apt-get install php-apcu php-intl imagemagick php-cli

Move required files and the database to new machine

If possible make sure that your Mediawiki is the latest version on the old server.

Next I packed and moved

  1. The database
  2. The Mediawiki directory /var/www/mediawiki
  3. The Mediawiki logs from /var/log/sites/mediawiki
  4. site configuration from /etc/apache/sites-available

and expanded them into the right place on the new server.
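
A rough sketch of that pack-and-move step (wikidb and newserver.example are placeholders; the paths are the ones listed above):

# on the old server: dump the database and pack everything up
mysqldump -u root -p wikidb > /root/wikidb.sql
tar czf /root/mediawiki-migration.tar.gz /root/wikidb.sql /var/www/mediawiki /var/log/sites/mediawiki /etc/apache2/sites-available
scp /root/mediawiki-migration.tar.gz <username>@newserver.example:

# on the new server: unpack into place and load the dump into the wikidb database created earlier
sudo tar xzf mediawiki-migration.tar.gz -C /
sudo mysql wikidb < /root/wikidb.sql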

A sane approach to the Mediawiki files ownership is as follows

First, recursively make yourself the owner of the Mediawiki directory and all of its subdirectories and files with

sudo chown -R <username> /var/www/mediawiki

and then explicitly make the images/ directory, where Mediawiki stores its writables, the possession of the user www-data (www-data is the user that Apache and thus Mediawiki run as) with

sudo chown -R www-data /var/www/mediawiki/images

Minimize downtime

The TTL (Time To Live) of the domain in the DNS also naturally affects the length of the outage, so lower it to a very short time, such as 15 minutes, well in advance of commencing the migration.

I temporarily modified the domain name of the Mediawiki (in /etc/apache2/sites-available and also in LocalSettings.php) to a temporary subdomain to test that the Mediawiki works on the new server before doing the DNS change for the production Mediawiki. After you have verified that the wiki works on the new server, change the domains back to the "real" one.

These two simple, practical practices help to make the imminent outage of your service as short as possible.


Configure Apache2

Link the .conf files with symbolic links from /etc/apache2/sites-available to /etc/apache2/sites-enabled.

ln -s ../sites-available/example.com example.com

Enable mod_rewrite which is needed for the pretty URLs to work.

sudo a2enmod rewrite

Test your Apache2 configuration with

sudo apachectl configtest

and fix your config until the configuration check says 'Syntax OK'

The last step is to make Apache2 reload its configuration, which is accomplished with

sudo service apache2 reload

Now navigate to the temporary subdomain’s /wiki/-directory and you should see your wiki there.

Warning: The Mediawiki extensions may have dependencies that are not satisfied so also check that each extension works.


If using reCAPTCHA

Google's reCAPTCHA stopped working: the CAPTCHA shows up, but when it was time to approve the human as a human I got an error message that reCAPTCHA "cannot contact server".

This was seemingly solved by logging in to the CAPTCHA management page at Google, deleting the old keys, generating new keys and, naturally, changing the keys to the new ones in Mediawiki's LocalSettings.php


Important: Enable outgoing email for Mediawiki

Now we need to put in place a way for the Mediawiki to send emails (very important).

My registrar Gandi.net provides a mail system which makes it possible to use $wgSMTP (set this in LocalSettings.php) to send outgoing mail. They also include 5 mailboxes and 1000 forwards for each domain for all registrants, so I can confidently use …@consumerium.org addresses. Gandi.net is a rock-solid operation with a very wide palette of TLDs, though maybe 20% higher prices than the price leaders, which are often buggy, slow and unreliable when they compete only on being the "cheapest on the planet".
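
A sketch of what the relevant LocalSettings.php snippet could look like when routing mail through an SMTP service (the host, port, mailbox and password below are placeholders; check your mail provider's documentation for the real values):

$wgSMTP = [
    'host'     => 'ssl://mail.gandi.net', // the provider's SMTP server (placeholder)
    'IDHost'   => 'consumerium.org',      // domain used in Message-ID headers
    'port'     => 465,
    'auth'     => true,
    'username' => 'wiki@consumerium.org', // placeholder mailbox
    'password' => 'the-mailbox-password'
];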

The other method to get email flowing outwards is to install an MTA (Mail Transfer Agent) such as Sendmail, Postfix or Nullmailer and configure it to send the messages.

Whichever method you choose to enable email, do check that it works!

Happy wiki-editing! – Juho

Citizens' initiative draft: Banning covert modelling

A citizens' initiative draft for banning covert modelling (salamallintaminen)

Covert what?!??!

Since the early 2000s it has become (nearly) impossible to tell, in moving and still images, which is a picture of a human shot with a (movie) camera and which is a simulation/model of a picture of a human shot with a simulation of a camera.

When no camera exists but the subject being depicted by the simulation looks deceptively human-shaped, we are dealing with a digital look-alike.

Now the same is happening to our voices, i.e. they can be stolen, for example with the prototypes presented in the autumn of 2016, Adobe Voco and DeepMind WaveNet, and made to say anything whatsoever. When human testing can no longer tell apart a real human voice from a simulation of a human voice, we are dealing with a digital sound-alike.

So it is time to act and ban covert modelling.

9 images showing various techniques on a model derived interactively from a single photo

Figure: Fitting a morphable model to a single image (1) produces a 3-D approximation (2) and a texture capture (4); the 3-D model is rendered back into the image made fatter (3), made thinner (5), frowning (6) and forced to smile (7)
Figure 1. Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311566 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.

Splitting reflected light to diffuse and specular components was not complicated
Original image Copyright ACM 2000 – http://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.

In movie theatres we have been able to see digital look-alikes for well over ten years now. These digital look-alikes wear "clothes" (a simulation of clothing is not clothing) or "superhero suits" and "supervillain suits", but unfortunately organised crime rings that have this weapons-grade capability at their disposal are spreading, with devastating consequences and in the murky parts of the internet, naked digital look-alikes and unnaturally "physical" interactions between them. These industrially produced delusions cause human and societal suffering, and the parts of them that can be banned should be banned by law to protect citizens from the arbitrary power of crime rings.

Anecdotally one may ask: "Do you think that was Hugo Weaving's left cheekbone that Keanu Reeves smashed in with his right fist?" (Video linked further down, keep scrolling.)

A look at current legislation touching on the matter

Chapter 24 of the Finnish Criminal Code, on offences against privacy, public peace and personal reputation, contains sections that touch on the destructive effects of covert modelling but unfortunately does not give the authorities the tools needed to tackle the problem.

  • § 6 Illicit observation
  • § 7 Preparation of illicit eavesdropping and illicit observation
  • § 8 Dissemination of information violating personal privacy
  • § 8a Aggravated dissemination of information violating personal privacy
  • § 9 Defamation
  • § 10 Aggravated defamation

A bill proposal for banning covert modelling

§ 1 Covert modelling of appearance.

Covertly acquiring a three-dimensional model of a human, or a 7-dimensional¹ bidirectional reflectance distribution function model formed from such a model, or an equivalent but technically different model (i.e. covert modelling of appearance), as well as possessing, buying, selling, handing over, importing or exporting such a model without the consent of the person being modelled, shall be punishable.

§ 2 On the projection of covert models and on making them available.

Projecting the covert models defined in section one into still or animated 2-dimensional images or stereo images², as well as making such images available, shall be punishable.³

§ 3 Covert modelling of the human voice.

Acquiring, possessing, buying, selling, handing over, importing or exporting a model of a human voice⁴ that deceptively resembles the voice of that human, without the permission of the person in question, shall be punishable.

§ 4 On the use of a covert model of a human voice.

Generating audio material based on a covert model of a human voice, and making such material available, shall be punishable.


  1. The seven dimensions are as follows: 3 Cartesian ones X, Y, Z, plus 2 for the incoming angle of the light and 2 for the outgoing angle. The English term is "bidirectional reflectance distribution function", BRDF. More information in my thesis, linked under "Further reading". There are three channels, R, G and B, with which in theory all possible lights can be formed.
  2. In movie-theatre parlance, so-called "3-D". In reality this is perhaps better expressed as just 2 pieces of 2-dimensional planes.
  3. Those merely in possession would mainly be urged to accept help rather than be criminalised.
  4. See e.g. Adobe Voco and DeepMind WaveNet. Not yet publicly available for listening.

Closing words

It may be seven-dimensional and we live in roughly four dimensions, but they cannot get out of that 2-D projection if we no longer let them. Does anyone here have any political will to try to do something about this?


Things to watch (seriously)

Things to watch (more on the entertainment side)

Further reading


This blog post is the July update towards drafting a CITIZENS' INITIATIVE. Once a citizens' initiative has been entered into the system, its contents can by law no longer be edited, so getting the initiative's content as good as possible is vital for protecting humankind from its own malice and ignorance. Once the initiative has been submitted, its makers (a deputy representative is needed; friends can volunteer) have six months to collect the other 49,998 signatures, at which point the Finnish Parliament MUST consider the initiative. Even if that number is not reached, making the initiative is in any case a step towards bringing this modern-day industrial filth into the daylight and into societal, and hopefully legislative, discussion. Get active and spread the word.

Citizens' initiative draft: The unauthorised acquisition, possession, trade, import and export of the prerequisites for digital look-alikes must be banned by law

This version is obsoleted by https://byjuho.fi/2016/12/21/kansalaisaloiteluonnos-salamallintamisen-kieltaminen/

 

Citizens' initiative draft: The unauthorised acquisition, possession, trade, import and export of the prerequisites for making digital look-alikes must be banned by law.

Introduction

Since about 2003 it has become (nearly) impossible to tell, in moving and still images, which is a picture of a human shot with a (movie) camera and which is a simulation of a picture of a human shot with a simulation of a camera. When no camera exists but the subject being depicted by the simulation looks deceptively human-shaped, we are dealing with a digital look-alike.

In movie theatres we have been able to see digital look-alikes for well over ten years now. These digital look-alikes wear "clothes" (a simulation of clothing is not clothing) or "superhero suits" and "supervillain suits", but unfortunately organised crime rings that have this weapons-grade capability at their disposal are spreading, with devastating consequences and in the murky parts of the internet, naked digital look-alikes and unnaturally "physical" interactions between them. These industrially produced delusions cause human and societal suffering, and the parts of them that can be banned should be banned by law to protect citizens from the arbitrary power of crime rings.

Anecdotally one may ask: "Do you think that was Hugo Weaving's left cheekbone that Keanu Reeves smashed in with his right fist?" (Video linked further down, keep scrolling.)

This blog post is a draft towards drafting a CITIZENS' INITIATIVE. Once a citizens' initiative has been entered into the system, its contents can by law no longer be edited, so getting the initiative's content as good as possible is vital for protecting humankind from its own malice and ignorance. Once the initiative has been submitted, its makers (a deputy representative is needed; friends can volunteer) have six months to collect the other 49,998 signatures, at which point the Finnish Parliament MUST consider the initiative. Even if that number is not reached, making the initiative is in any case a step towards bringing this modern-day industrial filth into the daylight and into societal, and hopefully legislative, discussion. Get active and spread the word. It may be eight-dimensional and we live in roughly four dimensions, but they cannot get out of that 2-D projection if we no longer let them.

Even from a single facial photograph the 3-D geometry can be captured with interactive human assistance, by morphing a model until the image matches it. Of course, the maker's eyes have usually seen an enormous number of images of the subject. (See Figure 1.)

9 images showing various techniques on a model derived interactively from a single photo
Figure 1: Fitting a morphable model to a single image (1) produces a 3-D approximation (2) and a texture capture (4); the 3-D model is rendered back into the image made fatter (3), made thinner (5), frowning (6) and forced to smile (7)
Image 1. Low-resolution rip because of time and ability restrictions. Original image Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311566 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.

A look at current legislation touching on the matter

Chapter 24 of the Finnish Criminal Code, on offences against privacy, public peace and personal reputation, contains sections touching on the matter but does not give the authorities the tools needed to tackle the problem, as follows.

  • § 6 Illicit observation
  • § 7 Preparation of illicit eavesdropping and illicit observation
  • § 8 Dissemination of information violating personal privacy
  • § 8a Aggravated dissemination of information violating personal privacy
  • § 9 Defamation
  • § 10 Aggravated defamation

The bill proposal:

§ 1 Covert modelling of appearance.

Covertly acquiring a three-dimensional model of a human, or a 7-dimensional¹ bidirectional reflectance distribution function model formed from such a model (i.e. covert modelling), as well as possessing, buying, selling, handing over, importing or exporting such a model without the consent of the person being modelled, shall be punishable.

§ 2 On the projection of covert models and on making them available.

Projecting the 7-dimensional model of section one, or a moving, i.e. 8-dimensional², model formed from it, into still or animated 2-dimensional images or stereo images³, as well as making such images available, shall be punishable.⁴


  1. The seven dimensions are as follows: 3 Cartesian ones X, Y, Z, plus 2 for the incoming angle of the light and 2 for the outgoing angle. The English term is "bidirectional reflectance distribution function", BRDF. More information in my thesis, linked under "Further reading". There are three channels, R, G and B, with which in theory all possible lights can be formed.
  2. Time is the eighth dimension, which appears to the human eye as movement of and changes in the pixels.
  3. In movie-theatre parlance, so-called "3-D". In reality this is perhaps better expressed as just 2 pieces of 2-dimensional planes.
  4. Those merely in possession would mainly be urged to accept help rather than be criminalised.

Contact

Further information, improvement suggestions etc. are gladly received. Requests for further information will most likely be answered with an offer letter; the shopping does have to be paid for with some means of payment, after all.

In Lohja, on Monday 2016-11-21, the maker of the initiative, citizen Juho Kunsola

Updated Tue 2016-11-22 numerous times

Updated Wed 2016-11-23 numerous times

Updated Thu 2016-11-24 slightly

Updated Wed 2016-12-21 by adding relevant passages of law and by clarifying the text of the bill proposal.


Further reading

Things to watch

Protecting GNU MediaGoblin

GNU MediaGoblin
GNU MediaGoblin is a sympathetic project but it is under attack.

Objective: GNU MediaGoblin instances that have open registrations are suffering from botnets registering accounts en masse for spamming purposes, thus forcing instance maintainers to close registrations. An especially annoying thing the botnets do is that they do not even check whether the email address lists they traded something for are valid, causing massive amounts of mail to be returned by the Mail Delivery Subsystem on the basis that the email box does not exist. Teslas_moustache on freenode IRC proposed that we should look into how Fail2ban could be utilized to stop known vandals.

Fail2ban logo
Fail2ban dynamically alters firewall settings to counter vandal activity by denying access to known vandal IPs. Logo used under the CC-BY-SA 3.0 license, courtesy of WMC user Palosirkka.

Fail2ban wiki on Fail2ban

“Fail2ban scans log files (e.g. /var/log/apache/error_log) and bans IPs that show the malicious signs — too many password failures, seeking for exploits, etc. Generally Fail2Ban is then used to update firewall rules to reject the IP addresses for a specified amount of time, although any arbitrary other action (e.g. sending an email) could also be configured. Out of the box Fail2Ban comes with filters for various services (apache, courier, ssh, etc).”

How to use Fail2ban

When properly configured Fail2ban dynamically modifies the iptables rules when it sees improper behavior.

  • Any IP address that can be associated with generating a flood of returned mail, because it tries to register an account with an email address that doesn't exist, should be banned. This is a stupid, annoying and basic FUD technique employed to discourage MediaGoblin people. A sketch of what such a ban rule could look like follows below.
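
As a sketch of the idea only: a custom Fail2ban jail could watch the web server's access log for repeated hits on the registration endpoint from the same IP. Everything here is an assumption to adapt, in particular the log path and the /auth/register endpoint, which depend on your web server and on MediaGoblin's URL layout.

# /etc/fail2ban/jail.local
[mediagoblin-register]
enabled  = true
port     = http,https
filter   = mediagoblin-register
logpath  = /var/log/nginx/access.log
maxretry = 5
findtime = 3600
bantime  = 86400

# /etc/fail2ban/filter.d/mediagoblin-register.conf
[Definition]
failregex = ^<HOST> .* "POST /auth/register
ignoreregex =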


Sharing information on vandal IP and email addresses

The issue has also been raised that, instead of lying down while the firing from the FUD-campaign botnets ensues, we should try to take their ground. For this it would be beneficial to form a data-sharing arrangement between GMG hosters so that we can combat the FUD campaign more effectively.

Installation of Etherpad

Etherpad logo
Etherpad is a free system for collaborative editing of text documents, well suited both to working in parallel and serially. It is provided courtesy of the Etherpad Foundation and the developers.

Objective: Install a private instance of Etherpad secured with TLS encryption and configure the system to have a good level of control over who gets to see and edit what, i.e. to authenticate the users.

Instructions used:


Basic install

Install the dependencies

sudo apt-get install gzip git curl python libssl-dev pkg-config build-essential

You will also need to download and install a working Node.js system. The installation manual recommends against using the version that apt-get installs and suggests going for the downloadable one.

The official Node.js installation guide gives the following instructions for Debian 8:

curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -

followed by

sudo apt-get install -y nodejs

This worked just fine, installing nodejs from deb.nodesource.com.

Next, create the directory where you want Etherpad to reside and git clone the source tree

git clone git://github.com/ether/etherpad-lite.git

then change directory into it and run

bin/run.sh

and

lynx http://127.0.0.1:9001

and you should see your Etherpad installation.

TLS encryption with LetsEncrypt.org certificates

Let's Encrypt logo
Let’s Encrypt is a free certification authority kindly provided by Internet Security Research Group (ISRG)

Objectives to be accomplished

  1. First I will be getting and installing a new cert for use on pad.byjuho.fi which will host an Etherpad instance to fulfill my secure textual collaboration needs safely.
  2. Second I will be replacing the soon-expiring commercial certificate for *.consumium.org. So far I know that I can keep the old cert in place and insert the new certs under a subjectAltName. This way the free social media that I host can (hopefully) continue operating normally without any downtime.

How I did it

The definitive instructions at readthedocs.io, which I found only some time after starting this, were very helpful, as they almost always are.

https://letsencrypt.org/getting-started/ recommends using

CertBot logo
CertBot is a free cert management solution provided by The Electronic Frontier Foundation (EFF)

CertBot from the Electronic Frontier Foundation to automate the installation of LetsEncrypt certificates, so I'm doing that.

CertBot takes as arguments your web server and operating system and provides instructions customized by those.

ByJuho.fi is being served by Apache 2.4 on Debian 8.5, so I chose those.

CertBot points to instructions for enabling backports on my system, which I promptly and successfully followed.

Then you naturally need to

sudo apt-get update

before the backports start to work.

After that

sudo apt-get install python-certbot-apache -t jessie-backports

It runs fine and installs a bunch of Python candy.

Next I ran

sudo certbot --apache

as instructed by CertBot's interactive website. It complained that it did not find any 'ServerName's in the configuration files, which is slightly strange. When answering 'no' to the "Do you want to proceed?" question it exited and hinted to specify the domain name with the '--domains' switch

sudo certbot --apache --domains byjuho.fi

A blue screen comes up that asks for the "emergency" email address. Put in one that you will never lose, like an https://iki.fi address, which I've used for over 20 years now and which is valid for a lifetime.

Next the blue screen asks if you want to have all traffic redirected to TLS encrypted. I chose to allow normal http too.

The program exits and gives the good advice to check the installed cert with the awesome free test tool by SSL Labs, so I proceeded to do so. Certbot apparently knows its stuff, since the site got an 'A' rating for all things SSL.

SSLTest rating A
Best of the A group / TLS protection rated A by Qualys SSL Labs' free, awesome SSLTEST service

Automating renewal of certificates

LetsEncrypt.org certificates are valid for only 90 days, probably due to meticulous planning and execution to maximize security, so we want to automate the renewal.

Now CertBot site instructs to test automatic renewal arrangement by issuing command

sudo certbot renew --dry-run

and it reports that everything seems to be in order to automate the renewal so I proceeded to do so with

crontab -e

and inserted an entry to quietly run the renewal twice a day, 12 hours apart. The command to be run is given as

certbot renew --quiet

But that will fail unless run with sudo, because it cannot access certain files, so you need to set up the cronjob as the superuser. Type

sudo su

give password and then run

crontab -e

(See here for practical examples of crontab entry syntax.) Exit the superuser account with Ctrl-D and you are done automating the renewal of the certs.
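
For reference, a root crontab entry along these lines would run the renewal twice a day, 12 hours apart (the exact minute is arbitrary):

17 3,15 * * * certbot renew --quiet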

The encrypted URL now leads to the default Apache2-on-Debian landing page ("It works... blah blah blah"), so I need to make a new VirtualHost directive for the encrypted site in /etc/apache2/sites-enabled/001-hosts, which is where I keep the directives.

So I need to figure out where CertBot put the certificate and the key.

CertBot puts the very secret key and the very public certificate in '/etc/letsencrypt/live/domain.tld', and the automagic from the blue screen creates a VirtualHost entry in '/etc/apache2/sites-enabled/000-default-le-ssl.conf'. After I made a normal VirtualHost entry in '/etc/apache2/sites-enabled/001-sites.conf' and commented everything out in 000-default-le-ssl.conf, this blog is now available also in TLS-protected form at https://ByJuho.fi.
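
For illustration, a minimal HTTPS VirtualHost using the files certbot leaves under /etc/letsencrypt/live could look roughly like this (the ServerName and DocumentRoot are stand-ins for your own values):

<VirtualHost *:443>
    ServerName byjuho.fi
    DocumentRoot /var/www/html
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/byjuho.fi/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/byjuho.fi/privkey.pem
</VirtualHost>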

Friendly folks at #freenode pointed out that

sudo apachectl -S

is very useful for locating problem points regarding conflicting VirtualHost directives

Next I am going to figure out whether commenting things out in 000-default-le-ssl.conf has any adverse effects. It seems the file with the lower number prefix takes precedence.


Next I try to replicate the necessary steps described in this blog post to actually enable https://pad.byjuho.fi

All that was needed to bring up the default Debian/Apache "It works" page over TLS-encrypted HTTPS was one run of

sudo certbot --apache --domains pad.byjuho.fi

and then fixing the VirtualHost directive to your liking to actually serve your content.


Getting new certs with nginx.

There doesn't seem to be quite the same level of automation for Nginx-hosted sites as for the Apache ones.

sudo certbot certonly --webroot -d d.consumium.org --webroot-path /var/www/diaspora

is what I used to successfully get the new certificates in place.
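
The certificate then gets referenced from the Nginx server block by hand, roughly like this (a sketch only; the paths follow certbot's /etc/letsencrypt/live layout and the rest of the existing site configuration stays as it was):

server {
    listen 443 ssl;
    server_name d.consumium.org;

    ssl_certificate     /etc/letsencrypt/live/d.consumium.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/d.consumium.org/privkey.pem;

    # ... the rest of the site configuration
}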

Installation of GNU MediaGoblin on Debian GNU/Linux

Installing GNU MediaGoblin 0.9.0 with Python 3 support, using PostgreSQL as the RDBMS, on a GNU/Linux system

Installing GNU MediaGoblin (over and over again)

Here I document how to install GNU MediaGoblin 0.9.0, "The Three Gobliners", on Debian GNU/Linux.

Why over and over again?

Short answer: the installation instructions really need to be read carefully to be completely understood, so I'll be installing it for a third time.

  • GNU MediaGoblin 0.8.0 I accidentally set up to use SQLite instead of PostgreSQL, the intended database backend. No migration script exists, so a reinstall was needed.
  • GNU MediaGoblin 0.9.0 I managed to install using Py2 instead of Py3.
  • GNU MediaGoblin 0.9.0 with the Python 3 version is what I am aiming at the third time around.
    UPDATE: It seems installation of GNU MediaGoblin 0.9.0 with Python 3 support is currently impossible if the idea was to use flup and FCGI. Follow this ticket for updates on the situation.

Since installation using Python 3 is impossible at the moment, I have installed the Python 2 version instead at https://media.consumium.org, using Nginx and FCGI for serving content.

Previously installed on the server is Nginx as the web server, with TLS enabled. Services already running on the server are https://hub.consumium.org (Hubzilla), https://d.consumium.org (diaspora*), https://social.consumium.org (GNU social) and https://friendica.consumium.org, so some of the dependencies are likely already there.


I ran into some problems which meant that the PostgreSQL cluster was not created and started. I got good help from StuckMojo in #postgresql on freenode IRC.

I fixed the situation by running

  • ‘nano /etc/locale.gen’ and uncommented the Finnish and US English locales
  • ‘sudo locale-gen’ generated the locales according to /etc/locale.gen
  • ‘sudo pg_createcluster --locale=en_US.UTF-8 9.4 main’ creates the cluster and
  • ‘sudo pg_ctlcluster 9.4 main start’ starts the cluster
  • check its status with ‘sudo pg_lsclusters’

Now everything should be ready to create the database user and the database.
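
A sketch of that step (mediagoblin as both the role name and the database name is just the conventional choice; adjust to your own setup):

sudo -u postgres createuser --no-createdb --no-superuser mediagoblin
sudo -u postgres createdb --encoding=UNICODE --owner=mediagoblin mediagoblin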

 

Migration of various free social media from GNU/Linux server to server

Migration procedure for moving various free social media from one GNU/Linux system to another, and the end results

Consumium free social medias and Consumerium consumer empowerment effort
Current logo for Consumium free social media services and Consumerium – Enhancing Consumer Informedness – effort

This is the record of what went well and what didn't go well in the process of migrating the *.consumium.org sites (except https://c.consumium.org, which is in Espoo).

This migration was completed on 2016-06-09. I would like to extend a warm thank you to https://TransIP.eu for showing compassion in my predicament and offering to credit me some of the costs incurred by needing 2 servers for a period of time.

<spam>Their operation is really top-notch and I have never had outages with them that I was not responsible for myself. Ever since I started hosting free social media with them in July 2013 the service has been outstanding, and their control panel includes the ability to take snapshots of system disks and a VNC, just in case someone is not comfortable working with the CLI. The first time I saw the VNC in the control panel and it started to show the Debian GNU/Linux white-on-black bootup in my browser I was impressed... Then it moved to runlevel 6 and I was naturally like "Whoa! It can do that!". TransIP.eu is maybe not the most inexpensive hosting out there at the moment, but I tell you their service level and its consistency are worth all the extra money. The SSD system disks are spacious and very fast, and just as soon as http://maidsafe.net/ starts I'll be purchasing at least one 10€ unit of 2,000GB big storage (which can be grown to 400,000GB, slightly under 400TB). Scp'ing between 2 servers in the same data center in the Netherlands I was able to clock 101 MB/s, which is almost a gigabit per second; a normal HDD couldn't handle that.</spam>

Debian 8 -> Debian 8 migration of 4 free social media instances. Debian GNU/Linux, Nginx for the web server, MariaDB for the RDBMS, and Ruby, PHP and Python as the languages the services run on.

Migration of diaspora* to a new server

  • https://d.consumium.org (how to install diaspora* freesome on Debian GNU/Linux)
    diaspora* is the biggest and best known of the free social media. It has innovative features, though it is somewhat limited because the creators think really hard about protecting the consumer from possible privacy-related threats. The software is high quality and reliable. It uses an asymmetric sharing arrangement that is diametrically opposed to Twitter's.

    This was the original raison d'être for the old server, called Debian7. The name is not very well chosen and is misleading, since the machine was dist-upgraded to Debian 8 stable without hiccups. diaspora* was originally installed in July 2013, which at the time took a couple of days.

  • Grabbed the database, app/views/home and public/uploads and inserted those into place, and the pod looks fine now after the migration.
  • Email was more of a hassle and is covered in a separate paragraph that you'll find further down this page.

Migration of GNU social to a new server

  • https://social.consumium.org (how to install GNU social freesome) (How I originally installed GNU social)
    GNU social is a no-nonsense microblogging platform that is simple to grasp. Unfortunately it does not work very well at the moment.

    – GNU social is a handy microblogging service. This instance was installed in 2016 and should pose no problems. MySQL was replaced with MariaDB during its installation with no problems. Update: the GNU social migration was the first one to be done. Grabbed the database (which contains the confs) and the 'avatar' and 'files' directories, shut down, put those in place, restarted the web server, and GNU social was up with apparently all the old information from the previous box.

  • If you are getting an Error 400: after the migration GNU social has been doing the same thing as before; when trying to microblog it often gives an error "400". Here one just needs to know to hit Ctrl-R; there is no need to even hit Ctrl-A Ctrl-C, Ctrl-R, Ctrl-V, as the software preserves what was written into the textbox.

Interesting point about Hubzilla and Friendica

Friendica and Hubzilla leverage the same instructional capital and best practices, which leads to their installation instructions having many portions in common.


Migration of Hubzilla to a new server

  • https://hub.consumium.org – (how to install Hubzilla freesome)
    Hubzilla is very high quality software and it has always worked just like the label said. Its use of channels is an intuitive way of interacting with other people.

    This will probably not have the old database restored, because when I originally installed it I didn't realize the point is to have many, many channels but just one login. Of course it might be possible to restore the database but manipulate it so that the Consum(er)ium-relevant channels would be under the same user.

  • Well, I did restore the old database after all.
  • Pretty much everything that was needed for the installation of Hubzilla was already there. I just needed to run 'sudo aptitude install mcrypt php5-mcrypt' and install Hubzilla, stop Nginx, and drop in the database and the user uploads located in /var/www/hubzilla/store, and it seems to work fine.

Migration of Friendica to a new server

https://friendica.consumium.org (how to install Friendica free social media)

Friendica is the free social media solution with the least steep learning curve for people who want to escape Facebook, or free themselves from it every now and then.

The Friendica migration did not require copying over more than just the database, as Friendica saves the uploaded files in the database and not in the flat file system.


Dealing with outgoing and incoming email

Getting email arrangements to work in a safe and reasonable way is by no means as easy as one might think at the start. diaspora* email was configured to use SMTP over a TLS-encrypted hop to https://gandi.net's SMTP server. It took a while to figure out, but I am guessing this makes the email look better to spam filters, as the "origin" is under the same domain as the machines given in the DNS MX records as the mail exchange servers for consumium.org.

‘sudo aptitude install sendmail’ installs sendmail, an MTA; this is apparently all that is needed for PHP's mail() function to work.



The migration plan (and how it went)

(Note: to lazily get all the dependencies, and hope there isn't old junk, you could follow this post: http://juboblo.gr/index.php/2015/12/02/original-howto-migrate-gnulinux-to-bigger-disk-with-clean-install-and-grab-all-apt-gettable-software-settings-and-files/)

Migration of system settings

  • Update services to the latest version so you get the exact same version when you reinstall each service from the latest release [✔]
  • Grab the TLS key and cert – remember to keep the key safe [✔] (note: exposing the server.key, usually kept in /etc/ssl/private, is very dangerous as it will expose all communications encrypted with that key)
  • Grab firewall settings allowing traffic to 22, 80 and 443 [✔] The NMAP security scanner is a great copyleft tool for looking at this. Tip: 'nmap localhost' inside the firewall and 'nmap <the IP address>' from outside the firewall are very useful scans for verifying firewall settings.
  • Grab confs:
  • /etc/nginx/nginx.conf [✔]
  • /etc/nginx/sites-enabled/nginx.conf [✔]
  • Grab home dir [✔]
  • Grab logs [✔]
  • /var/log/nginx/access.log [✔]
  • /var/log/nginx/error.log [✔]
  • Then decided to grab all of /var/log into a .tar.gz. It is only logs, it cannot hurt. [✔]
  • Mass grab /etc and /var/www for later reference for when the old server is recycled and its resources returned to the cloud.
  • Get a new server. [✔] Remember to install an ssh server when installing the software or you'll be unable to access the machine via ssh. Only if the hosting company provides a Virtual Network Console can you fix this problem there.
  • Add self to sudoers [✔]
  • Restore home dir contents [✔]
  • Install Nginx [✔]
  • Put logs, key, cert and nginx.conf in place [✔]

Repeat following steps for each service

  • Install dependencies [✔]
  • Install new service clean [✔]
  • NOTIFY USERS THAT ANYTHING POSTED DURING THE NEXT FEW HOURS WILL BE LOST. Better idea: when everything is ready with the new installation in place and you are thus ready to start the DNS change propagation, tell people that the database will be frozen when the old machine becomes "unreachable" because the DNS already points to the new machine.
  • Grab databases. Each database separately. [✔]
  • Grab user uploaded content and the custom landing page for d* [✔]
  • Insert grabbed database, confs, landing page, user uploaded content. [✔]