
Remove old to make space for a better replacement

Andreas Linz 3 years ago
94 changed files with 0 additions and 4688 deletions

+ 0
- 4
.gitignore

@ -1,4 +0,0 @@

+ 0
- 3
.gitmodules

@ -1,3 +0,0 @@
[submodule "themes/klingtnet"]
path = themes/klingtnet
url =

+ 0
- 13
Caddyfile.example

@ -1,13 +0,0 @@ {
	basicauth /admin admin admin
	hugo {
		theme klingtnet
		root public
	}
	log stdout
	errors {
		log stderr
		404 404.html
	}
}

+ 0
- 165

@ -1,165 +0,0 @@
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the Library.

+ 0
- 12
Makefile

@ -1,12 +0,0 @@
.PHONY: clean
all: build
build: content themes
rm -r public
deploy: build
rsync --update --stats -r public/* --protect-args --chown=www-data:www-data kn:/var/www/sites/

+ 0
- 33

@ -1,33 +0,0 @@
# - my place on the web since 2006™
[Take a visit](
## Installation
git clone --recursive
Install the hugo static site generator:
go get -v
Take a look at the [official installation instructions for hugo]( for more details.
## Build
## Deploy
make deploy
## Notes
- `` was used to convert the reStructuredText-formatted posts to markdown with the help of `pandoc`

+ 0
- 32
config.toml

@ -1,32 +0,0 @@
baseurl = ""
languageCode = "en-us"
title = ""
description = " - sounds nice"
theme = "klingtnet"
logo = "imgs/logo.svg"
homepage = "static/"
name = "Andreas Linz"
homepage = ""
name = "github"
url = ""
name = "keybase"
url = ""
name = "gogs"
url = ""
name = "syncthing"
url = ""
name = "reports"
url = ""
name = "files"
url = ""

+ 0
- 118
content/blog/ View File

@ -1,118 +0,0 @@
"date": "2014-11-04",
"description": "A quick guide how I tried to fill some potential security holes on my laptop to prepare it for the 31st Chaos Communication Congress.",
"slug": "31c3-preparations",
"tags": "31c3, chaos congress, ccc, preparations, security, firewall, ports, tips",
"title": "31c3 preparations"
I am going to attend this year's [Chaos Communication
Congress]( for the first time. There will
be a lot of <span class="strike">hackers</span> security engineers with
too much spare time and little respect for other people's data, so if
you also take part in this year's congress, I recommend checking your
system for potential security holes. It is also a good opportunity to
think about encrypting (parts of) your data and setting up or renewing
your passwords.
I am not a security expert, so use the following commands at your own
risk. The sole purpose of this post is to serve as documentation for the
next time I have to check my Arch Linux system for potential security
holes.
open ports
The first thing to do is to *backup your data*, then to scan for open
ports on your machine:
sudo nmap -sS -sU -sY localhost
This scans for open TCP/UDP and SCTP ports. The output should look
something like this:
Starting Nmap 6.47 ( ) at 2014-11-04 19:14 CET
Nmap scan report for localhost (
Host is up (0.000014s latency).
Other addresses for localhost (not scanned):
rDNS record for localhost.localdomain
Not shown: 1994 closed ports, 52 filtered ports
5500/tcp open hotline
8000/tcp open http-alt
8080/tcp open http-proxy
68/udp open|filtered dhcpc
123/udp open ntp
5353/udp open|filtered zeroconf
Now we want to know which process belongs to which port. For this task
we need `ps` and `fuser`. Take e.g. port 8000:
ps -p $(fuser -n tcp 8000)
Because I've got a local Python HTTP server running on port 8000 (to
preview this blog while writing posts), the output doesn't surprise me:
14488 pts/1 00:00:00 python3
You should ask yourself which services from that list are really
essential as well as secure, and stop everything that is not. It would be
a bad idea to run a self-written web service with root privileges on port
80, even if it has a password-secured login.
setup a firewall
There is an excellent guide on how to set up [UFW (Uncomplicated Firewall)
UFW is a front-end for iptables and makes the configuration dead easy.
network shares
Now it's time to check for open
[samba]( or NFS shares.
To look for open samba shares use `smbtree -N`, where `-N` suppresses the
password prompt. To disable them, edit your `/etc/samba/smb.conf` or
disable your samba service.
To disable an NFS share, remove or comment out the corresponding line in
`/etc/exports`.
quick tips
Raise the bar for thieves a little and use a BIOS password for your
laptop, to prevent them from bypassing your user password and
accessing your data using a simple boot disk or USB stick running e.g.
[Knoppix]( While you are
in the BIOS, disable booting from external devices as well.
If you haven't done it already, set a password for your ssh keys!
Maybe someone steals your machine and gains access to them. I can't
even imagine ...
ssh-keygen -p -f keyfile
You don't have to encrypt your whole hard drive, but at least encrypt
your personal information. I am using
[encfs]( for this; it's a
userspace filesystem and really easy to set up.
Buy or set up a VPN and enable the automatic VPN connection on your LAN and
wireless interfaces with NetworkManager.
Did I say before that you should *backup your data*? I am using
[ddrescue]( for this purpose.

+ 0
- 16
content/blog/ View File

@ -1,16 +0,0 @@
"date": "2015-12-27",
"description": "A collection of useful links for the chaos congress.",
"slug": "32c3-useful-links",
"tags": "32c3",
"title": "32c3 Useful Links"
- [live streams](
- 32c3 [event map](
- 32c3 [wiki
and the faster [mirror](, which is only
accessible from the 32c3 network
- indoor navigation [c3nav](

+ 0
- 65
content/blog/ View File

@ -1,65 +0,0 @@
"date": "2015-12-26",
"description": "How to access the 32c3 wireless network",
"slug": "32c3-wifi-settings",
"tags": "wifi wireless-lan network",
"title": "32c3 Wifi Settings"
This post is for those who are struggling to get a connection to the
32c3 wifi network. The 32c3 wiki has some problems at the moment, but
you can view the network setup instructions using [Google's cache](
These are my NetworkManager settings (they are the same for the `32c3`
and `32c3-legacy` networks):
You need to use this certificate

+ 0
- 50
content/blog/ View File

@ -1,50 +0,0 @@
date = "2016-05-09T12:41:01+02:00"
title = "Fix Buzzing Sound in Behringer B2031A Studio Monitor"
Some weeks ago, one of my [B2031A]( monitors started making strange noises after running for more than half an hour, and the speaker popped very loudly when I switched it off and on within a short time.
It was not the type of [high-pitched digital noise]( where you can hear your mouse cursor moving on the display because the monitor's or computer's cheap power supply injects a lot of noise into the ground wire.
By the way, that kind of noise can be fixed with a proper [DI unit]( I'm using [this one]( to isolate my computer from the amplifier.
However, one speaker was making a buzzing sound that got louder until the tweeter shut off.
The source of the issue could be anything, but the speakers are already more than 10 years old, and failing electrolytic capacitors are a common problem for amplifiers and power supplies, so I decided to take a look and replace them if necessary.
Here is a list of tools and materials that you need to replace the capacitors:
- A Phillips screwdriver
- A small wrench (to remove the lock nut from the op-amp; small pliers will work, too)
- A soldering iron
- A desoldering pump (highly recommended)
- (1mm) solder wire
- Thermal paste (to repaste the op-amps)
- Two 6,800µF 50V capacitors
I've used [TDK/Epcos B41231](, but they have different dimensions than the original ones (22x50mm instead of 25x40mm). I would try to get ones of the same size because I had to improvise a bit to make them fit.
**DISCLAIMER**: If you are not a trained electrician then don't try to repair any electrical device that needs more than [extra-low voltage]( to operate because you risk getting a lethal electric shock.
I will not be responsible for damage to equipment, blown parts or personal injury that may result from the use of these instructions.
The disassembly was quite easy:
- Unscrew the amplifier case, starting with the outer screws and then remove the three dome headed screws
- Disconnect all cables from the board (they only fit in one direction which makes reconnecting very easy)
- Unscrew the op-amps from the heat sink
![Behringer B2031 amplifier board front](/imgs/b2031a_amp_board_front.jpg)
The main capacitors seemed to be fine at first glance, but there was almost no electrolyte left inside.
Empty capacitors make a rattling sound when you shake them.
Another modification I would have liked to make is lowering the threshold level of the auto-power circuit.
The factory default setting is way too high, so the speaker always shuts off on silent passages.
Unfortunately, the resistor responsible for the auto-power level is an SMD part and I don't have the appropriate tools to replace it.
Anyone else can try replacing the 47 ohm `R83`, located near `IC11`, with one of lower resistance.
I can't provide a link to the schematics, but you can find them easily using google.
![Behringer B2031 amplifier board back side](/imgs/b2031a_amp_board_back.jpg)
The 22x50mm replacement capacitors were a little too tall to fit inside the case, so I had to improvise a bit.
My hacky solution was to solder some wires between them and the board.
Nonetheless, the speaker works fine and the buzzing noise is gone.
![Behringer B2031 amplifier board fixed](/imgs/b2031a_amp_fixed.jpg)

+ 0
- 25
content/blog/ View File

@ -1,25 +0,0 @@
"date": "2014-11-20",
"description": "Shortcuts for DuckDuckGo.",
"slug": "bang",
"tags": "DuckDuckGo, shortcut, search engine, tips",
"title": "Bang!"
Today I stumbled across this
[article]( which describes how to use
the so-called *!Bangs*. A *!Bang* is nothing more than a shortcut for a
specific web search, prefixed with an `!`.
[DuckDuckGo]( has hundreds of these shortcuts
integrated; take for example `!i` to search for images with google or
`!yt` to search on YouTube. This is really powerful:
`!wa plot sin(x)*sin(y)` plots a simple wave function using
Wolfram Alpha.
You don't have to know all the shortcuts because the autocompletion
does that for you, as you can see in the picture below.
![duckduckgo searchbar with '!'](/imgs/duckduckgo_bang.png)
Now I only need to find out how I can ask their webcrawler to
recrawl this website, because that is what I actually wanted
to know.

+ 0
- 231
content/blog/ View File

@ -1,231 +0,0 @@
"date": "2014-11-19",
"description": "The configuration of my digitalocean droplet that serves this blog.",
"slug": "blog-setup-part-1-digitalocean",
"tags": "digitalocean, vps, ufw, nginx, subdomains, ubuntu",
"title": "Blog Setup Part 1 - Digitalocean"
[™]( is back after a long time. This
isn't the first post, but in this first part of the series I will show
you how this blog is served. In the second part I will describe how I
configured [nikola]( and made my custom blog
theme using [sass](
I've investigated different web hosts before I went to digitalocean,
including [uberspace](, and a VPS at 1und1.
Uberspace was fast, SSH access was possible, they had nice and helpful
support and, best of all, you could pay what you want. The main
disadvantage was that you don't get root permissions, because they are
selling a shared webhosting service. The virtual private server at 1und1
had nice specs and was super cheap, at least for students, which means
1€ per year. But, regardless of the specs, their VPS was nearly half as
fast as my 5$ Droplet, it took almost an hour to set up a new
machine, they use custom linux images without
[docker]( support, and the machine management
website was a mess.
After the bad experience with 1und1 I wanted to try out digitalocean,
first because the people around me always recommended it and second
because of the [Student Developer Pack]( from GitHub.
Before creating the droplet there was the choice of the OS. Currently
they support Ubuntu, Debian, CentOS and CoreOS. Unfortunately
they don't provide [Arch]( anymore, so I
decided on [Ubuntu]( version 14.10.
Additionally you can specify the datacenter to use, which is Amsterdam
in my case. After that it's time to set up your SSH public key in the
web interface for the droplet or to [generate a new
key-pair]( Now
you should be able to ssh into your new droplet and change the timezone
according to your location using `dpkg-reconfigure tzdata` (quite an
obvious command for changing the timezone … ).
A website is nothing without a webserver, so I had to choose
between [Apache]( and
[nginx]( They are both full-blown webservers, so the
choice is more a matter of personal preference. In the end I decided to
give nginx a chance because the configuration seemed much easier
(and the cool kids use it 😎). The [official
documentation]( has installation
instructions for all kinds of operating systems, but my droplet runs
Ubuntu, so I will only give the instructions for this system:
add-apt-repository ppa:nginx/stable
apt-get update
apt-get install nginx
Now it's a good time to update the packages:
`apt-get update && apt-get upgrade`.
Before I began to configure the webserver I read about the common
[nginx pitfalls]( One thing that article
says is to distrust every article on the web about nginx
configuration. You should do the same regarding this post.
I've created a folder for each website and subdomain that should be
served. In my case it's one for my weblog and another one for the
reports that will be generated from the nginx logs using
[goaccess](, but more on this later.
mkdir -p /var/www/{www,reports}
I've changed the ownership of the freshly created directories to
`www-data` user and group.
chown -R www-data:www-data /var/www/
Nginx provides a sample configuration file that you can use as a basis
for your *server blocks*, or virtual hosts in Apache terms. Put
simply, a server block is a combination of server name and
ip/port specification.
cp /etc/nginx/sites-available/default /etc/nginx/sites-available/
Now we have a copy of the base configuration file that you can edit with
the editor of your choice. We only need the last part, which begins with
`Virtual Host configuration for`. It's good practice to
serve your main website from `www.domain.example` as well as
`domain.example`. You can either configure a redirect via [http
status code 301]( from the `www`
subdomain to the root domain, or add both URLs to the `server_name`
parameter, which is what I've done. Don't forget to change the `root`
path to the directory you created before:
`root /var/www/;`. Now the configuration file should
look something like this:
server {
    listen 80;
    listen [::]:80; # IPv6

    root /var/www/;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
If you haven't done it before, you should now set the `A` and `AAAA`
(IPv6) records of your domain to point to the IP address of your
droplet. When you've done that, make a symlink from the config file
into nginx's `sites-enabled` directory to *enable* the server block:
ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/
Restart nginx `service nginx restart` and your website should be served!
nginx log analytics
At first I didn't want to use any analytics at all, but generating a
report from the nginx logs is dead easy with
[goaccess]( and doesn't involve injecting
third-party javascript into my website. Luckily, Ubuntu provides a
goaccess package, so you can install it via `apt-get install goaccess`.
It isn't strictly necessary to generate html reports, because goaccess
can show the report right in the terminal, but for convenience I want
them generated as an html file. My idea was pretty straightforward: a
cronjob calls a script every 10 minutes which generates a new html
report using goaccess. This is the script that calls goaccess:
#!/bin/sh
# WWW_ROOT must point at the web root, LOG at the nginx access log
if [ -e $WWW_ROOT/reports ] ; then
    cat $LOG* | goaccess > $WWW_ROOT/reports/index.html
fi
Because the logs are rotated I have to combine them; this is done with
`cat $LOG*`, which then pipes its output into goaccess. Before running
the cronjob, the date and log format must be specified in
/etc/goaccess.conf; in my case uncommenting the default values was
enough. Now it's time to add the cronjob, which is done with `crontab -e`
and adding this line: `*/10 * * * * /path/to/your/`.
Alright, now we get fresh reports every 10 minutes. Because the reports
aren't meant to be available to the whole web, I've configured basic
authentication for them.
We are almost finished; the only thing missing is a little bit of
optimization of the nginx configuration.
### optimization
Google PageSpeed Insights
showed me that I hadn't activated gzip compression and caching.
Enabling gzip compression is easy: open your `/etc/nginx/nginx.conf` and
uncomment `gzip on;`. Depending on the power of your server you could
also raise the `gzip_comp_level`, but levels above 6 need much more
processing power for a minimally reduced file size. The content types that
should be compressed can be set under `gzip_types`.
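To get a feeling for the diminishing returns above level 6, you can compare compression levels locally on sample data (the repeated CSS line below is arbitrary sample content, not from this site):

```shell
# Compress identical sample data at gzip level 6 and level 9 and
# compare the resulting byte counts.
sample=$(mktemp)
yes 'body { margin: 0; padding: 0; }' | head -n 2000 > "$sample"
s6=$(gzip -6 -c "$sample" | wc -c | tr -d ' ')
s9=$(gzip -9 -c "$sample" | wc -c | tr -d ' ')
echo "level 6: $s6 bytes, level 9: $s9 bytes"
rm -f "$sample"
```

The level 9 output is at most marginally smaller while costing noticeably more CPU, which is why the default of 6 is usually kept.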
Caching is slightly more complicated, but all I had to do was add
this location directive to my server block configuration:
location ~* \.(css|js|gif|jpe?g|png|ttf|otf|woff)$ {
    expires 7d;
    add_header Cache-Control private;
}
I am providing a source-code link for every post on my weblog; with the
default mime.types settings you will always get an annoying download
dialog when you try to open the `.rst` source link. To fix this I had to
add a `text/plain` content type for
[rst]( files in
nginx's `mime.types` file.
Don't forget to restart nginx for the changes to take effect:
`service nginx restart`.
configure ufw
One last thing is to enable the firewall. Because [ufw](
makes this super easy, there is no excuse for not doing it. If you don't
want to block IPv6 you should change `IPV6` in `/etc/default/ufw` to
`no`. Ok, so let's start:
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow www
ufw enable
In Ubuntu `ufw enable` also creates an init.d script, so the firewall is
started automagically on boot. Enabling the firewall shows you (based on
the logs) how often someone scans for open ports etc.;
sometimes this is a little bit scary. Maybe I will write an article
about analyzing the firewall log.

+ 0
- 17
content/blog/ View File

@ -1,17 +0,0 @@
"date": "2014-10-27",
"description": "Hot to fix the wifi connection problem after standby.",
"slug": "broken-wifi-after-standby-in-archgnome",
"tags": "wifi, Arch, NetworkManager, Gnome, systemctl, systemd, fix",
"title": "Broken wifi after standby in Arch/Gnome"
Sometimes my laptop can't connect to the wifi network after waking up
from suspend, and the network icons are missing from Gnome's
panel. Simply restarting the `NetworkManager.service` via `systemctl`
doesn't help in most cases, but the following works for me:
sudo systemctl kill NetworkManager.service wpa_supplicant.service
sudo systemctl start NetworkManager.service wpa_supplicant.service
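The two commands can be bundled into a small helper; the function name below is made up, and since `systemctl` needs root the sketch only echoes what it would run:

```shell
# Hypothetical helper wrapping the kill/start sequence above.
# Drop the `echo`s to actually bounce the services.
fix_wifi() {
    echo sudo systemctl kill NetworkManager.service wpa_supplicant.service
    echo sudo systemctl start NetworkManager.service wpa_supplicant.service
}
fix_wifi
```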

+ 0
- 17
content/blog/ View File

@ -1,17 +0,0 @@
"date": "2016-02-27",
"description": "How to downgrade any arch linux package.",
"slug": "how-to-downgrade-any-arch-linux-package",
"tags": "arch, linux, downgrade, package, pacman",
"title": "How to Downgrade any Arch Linux Package"
- go to the [Arch Linux Archive]( and find the package version you want to install
- let's say that I want to downgrade [SANE]( to `1.0.24-4`:
- the sane package is located under [`/packages/s/sane`](
- download the package: `curl --remote-name -Lsf ''`
- install it: `sudo pacman -U sane-1.0.24-4-x86_64.pkg.tar.xz`
- done!
- to ignore a specific package when you run a pacman update use the `--ignore` switch: `pacman -Syu --ignore sane`
- for more information, e.g. how to restore all your packages to the state of a specific date, take a look at the [wiki](

+ 0
- 34
content/blog/ View File

@ -1,34 +0,0 @@
"date": "2015-05-06",
"description": "How to fix the broken DNS settings after updating NetworkManager to 1.0.2-1",
"slug": "fix-broken-internet-connection-after-networkmanager-102-1",
"tags": "network, fix, Linux, Arch",
"title": "Fix broken internet connection after NetworkManager 1.0.2-1 update"
It's been a long time since my last post, but I don't want to keep the
following instructions to myself. NetworkManager has
[openresolv]( as
an optional dependency and worked flawlessly without it, until today. If
you're lucky, your internet connection will still work after the update,
because the old nameserver settings in `/etc/resolv.conf` are
compatible with the network you are connected to. However, thanks to
[this reddit
post]( I was
able to fix the problem.
The first step is to enter a working nameserver in your
`/etc/resolv.conf` by adding the following line: `nameserver`
(Google's public DNS). By the way, the file should begin
with `# Generated by NetworkManager`, which changes to
`# Generated by resolvconf` after you have finished these instructions.
Now it's time to install `openresolv` using `pacman -S openresolv` or
whatever package manager you're using. If you don't want to temporarily
use the google DNS server, download the [openresolv
package]( on a
computer with a working internet connection and install it with:
`pacman -U /path/to/openresolv.xz`. We're almost finished; the last step
is to restart NetworkManager using systemctl:
`systemctl restart NetworkManager`. Voilà, you have a working internet
connection again!
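To verify the switch afterwards you can check the generator header and list the configured nameservers. The snippet uses inlined sample content so it runs anywhere; on a real system you would point the `awk` at `/etc/resolv.conf` instead:

```shell
# Print the nameserver entries from resolv.conf-style input.
# Sample content is inlined so the snippet doesn't touch /etc/resolv.conf.
printf '# Generated by resolvconf\nnameserver\n' \
  | awk '/^nameserver/ {print $2}'
```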

+ 0
- 151
content/blog/ View File

@ -1,151 +0,0 @@
"date": "2014-11-25",
"description": "Why I switched from Dropbox to Syncthing and how I organized my synchronized folders.",
"slug": "from-dropbox-to-syncthing",
"tags": "file synchronization, dropbox, syncthing, go",
"title": "From Dropbox to Syncthing"
I had used [Dropbox]( for about two and a half
years on my desktop and mobile devices and was pretty satisfied with it
most of the time. But even before Condoleezza Rice was part of
[Dropbox's board of
directors](, I
had a bad feeling about my data lying on their servers. Another downside
was the linux client; fortunately it's not half as bad as
[skype]( The
tray icon always disappeared under
[Gnome3](, clicking on a single
notification opened up a dozen instances of
[Nautilus](, and using closed-source
software meant that I had to install it manually or use it from the
Arch User Repository,
which I try to avoid whenever possible. Also, the android client seems to
forget my login credentials from time to time.
One might say that I only had to encrypt my Dropbox folder in order to
reduce my worries about security. That's what I did, and it works
well using [EncFS]( as long as you
don't use your Dropbox across different operating systems, especially on
mobile. But distrusting a service provider and at the same time using a
service they offer is wrong from the ground up.
Not everything was bad: the synchronization worked well and I can't
remember a moment when their servers were down. Getting public links for
my files and the camera upload feature were nice additions, too.
Beyond the security point of view, I would have really liked to share
folders with read-only access and to get a lot more storage space. I
could have solved the last two points with a premium account, but this
wasn't an option because I had enough unused server resources to host my
own synchronization service.
I thought about using [Seafile]( or
[ownCloud](, but both need a central server to work,
and the latter is written in [php](, which is reason
enough for me to avoid it. Both of them are full-blown software and
provide a lot more features than simple file synchronization, which
isn't bad at all, but I don't need those goodies.
The main difference between conventional file hosting services like
Dropbox and Syncthing is, that Syncthing is serverless. This means that
you can only sync files between two instances if both of them are online
at the same time. That's not very practical so I still have to use a
server that is running another Syncthing instance. It's lightweight,
doesn't have external dependencies, and the basic configuration is done
in under a minute, so it doesn't hurt that much.
For Arch Linux: `pacman -S syncthing`. If you are running Windows
download the latest release from the [official github
repo]( and I strongly
recommend downloading
[SyncthingTray]( as well.
To run it automatically after boot enable the systemd service:
`systemctl enable syncthing@USERNAME`.
Now you are ready to start (the service)
`systemctl start syncthing@USERNAME`. This should open your browser at
`http://localhost:8080/` and show you the dashboard, which looks
something like the image shown below, except that you should see only
one folder and device. The default sync folder is located under
`~/Sync`, you can remove it after configuring at least one more folder.
![the syncthing dashboard](/imgs/syncthing_dashboard.png){.kn-image}
At first it is not exactly obvious how to synchronize a folder, but with
these few tips it will not be a problem at all:
Add the devices you want to synchronize with by using the `Add Device` button in your Syncthing dashboard.
: Devices are identified by an ID that you can show when you click
on the *gear* symbol in the menu bar of the dashboard. You need
the IDs of all the machines that you want to add and vice versa.
Add the folder you want to synchronize and in the following dialog enable all the devices that you want to synchronize with.
: This has to be done on the other devices as well, but note that
the *name* of the folder, not the path, has to be the same
across all devices. By enabling the *Folder Master* option you
can synchronize the folder as *read-only*.
- You have to restart Syncthing after every folder you add; this can
  also be done from the menu bar of the dashboard.
Maybe you've asked yourself how the devices can talk to each other. The
answer is the *Global Discovery Server*, which is used to share the
addresses between the devices. If you want, you can run one yourself.

There is a lot of progress in the project on GitHub, so I am hopeful
that this procedure will become much more user friendly in the near future.
Directory structure
I have decided to add every subfolder of my base folder (`syncthing`) as
a separate entry in my Syncthing configuration. This lets me choose the
optimal synchronization options for each folder. To see directly which
folders are shared with other people I have created the `shares` folder:
├── audio
├── docs
├── dump
├── edu
├── gfx
│   ├── Pixel
│   └── Vector
├── mobile
├── shares
│   ├── LT
│   └── uni
└── wallpaper
Symbolic links will be copied as is, which means that Syncthing won't
follow those links and synchronize their content as Dropbox does. Maybe
this [behaviour has
changed]( in a newer
version. <span class="strike">Because the Arch package is exceptionally
out-of-date I will update this post when the newest version is
available.</span> Update: Syncthing 0.10.8 is out and symlinks won't be
followed; they will be copied as is, and this should work on Windows as
well.

---

**content/blog/**
"date": "2015-02-02",
"description": "My notes to the official golang tour.",
"slug": "get-go-ing",
"tags": "go, golang, google",
"title": "Get Go-ing!"
This post is a [pandoc]( converted
version of my [get-go-ing](
github repo. It contains my example solutions and notes to the official
[golang tour](
Get Go'ing
Playground for my [first steps in Go](
- Go programs are made out of **packages**
- the `main` method must be in the **main** package
- inside of the **import** statement are the packages specified that
should be imported
- the last element of the *import path* is the package name,
by convention. `math/rand` imports the files from `math` that
begin with `package rand`
- in Go, a name is exported if it begins with a capital letter, e.g. `math.Pi`
### [Functions](
- function definitions start with `func` followed by the function
name, the parameter list and the return value
- as opposed to C, the parameter name comes before the type, e.g.
`x int`
- the [Go blog explains why they chose this
  syntax](
- if two or more consecutive parameters share the same type, you
can omit it from all but the last
- a function can return *any* number of values (like tuples
in python)
- **strings** are enclosed in double quotes `"`
func add(a, b int) int {
    return a+b
}
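The "any number of return values" bullet above can be illustrated with the classic `swap` function (my own minimal example, not from the tour text):

```go
package main

import "fmt"

// swap returns its two arguments in reverse order,
// demonstrating multiple return values.
func swap(x, y string) (string, string) {
	return y, x
}

func main() {
	a, b := swap("hello", "world")
	fmt.Println(a, b) // world hello
}
```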
### [Variables & Types](
- the **var** statement declares a list of variables, with the type last
- it is allowed on *function* and *package* level (global)
- examples:
- `var a, b bool`
- initializers can be used like this:
- `var a, b, c = true, false, "hej!"`
- note that the type can be omitted if an initializer is present
- each variable from the initializer list can have a
  different type
- var statements can be factored into blocks, similar to the
import statement, see [basictypes.go](./src/basictypes.go) for
an example
- variables declared without an explicit initial value will be
  initialized with their type's *zero value*
- inside a function the **short assignment** statement can be used:
`a := 100`
- **type conversions** can be done with `T(..)`, where `T` is the type
  and inside of the parentheses is the value to convert, e.g. `float64(i)`
#### [Function Values](
- functions can also be assigned as variable values:
square := func(x int) int {
    return x*x
}
#### [Closures](
- a closure is a function value that references variables from outside
its body
func adder() func(int) int {
    sum := 0
    return func(x int) int {
        sum += x
        return sum
    }
}
- the inner function can access the `sum` variable from the enclosing
function, even after the outer function has returned
### [Constants](
- declared using the `const` keyword
- **can't** be declared using the short assignment statement `:=`
- constants can be character, string, boolean, or numeric values
- numeric constants are *high-precision* values
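A short sketch of the constant rules above (example values are my own):

```go
package main

import "fmt"

// Constants are declared with `const`; `:=` is not allowed here.
const Pi = 3.14159

// Untyped numeric constants keep high precision and only get a
// concrete type when one is needed.
const Big = 1 << 60

func main() {
	const greeting = "hej!"
	fmt.Println(greeting, Pi)

	var x int64 = Big // Big fits into an int64
	fmt.Println(x)
}
```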
### [Loops](
- Go has only one looping construct, the `for` loop
- to emulate a `while` loop leave the *pre* and *post* statements
empty: `for ; x < y; {}`; you can even omit the *semicolons*:
`for x < y {}`
- omit the loop conditions and you get an infinite loop: `for {}`
for i := 0; i < 10; i++ {
    sum += i
}
### [Conditions](
- C-like but without the parentheses:
if x < y {
    // ...
} else {
    // ...
}
- you can write a pre-statement before the if-statement
- variables declared in this pre-statement are only visible inside the
scope of the if statement
if x := 10; x == 10 {
    fmt.Println("It's only an example.")
}
- **switch-case** statements break automatically, unless a case ends
  with an explicit `fallthrough` statement
- the *evaluation order* is from *top to bottom*
- a `switch` without condition is the same as `switch true` and can be
used for long if-else chains:
switch {
case t.Hour() < 12:
    fmt.Println("Good morning!")
case t.Hour() < 17:
    fmt.Println("Good afternoon.")
default:
    fmt.Println("Good evening.")
}
### [Pointers](
- **pointer declaration** is C-like: `*T`, where `T` is the type of a
value the pointer refers to
- the `&` operator generates a pointer to its operand, while the `*`
  operator **dereferences** a pointer (gives access to the value it
  refers to)
- there is no **pointer arithmetic** in Go
var p *int
i := 42
p = &i
fmt.Println(*p) // prints 42
- example use cases:
- avoid copying large structs to a function by passing a pointer
to the struct to the function
- as the [Go FAQ](
  says, it's **not** call-by-reference,
  because *the pointer is copied*, as well as every other
  argument which is passed to the function
- in-place modification, say you want to modify elements of a
struct inside your function without returning it. I'm sure there
is a valid use case for this, but I would consider it *bad
practice* in most cases.
Structured Data
### [Structs](
- `struct literals` denotes a newly allocated struct
- you can list a subset using the `Name:` syntax: `Vertex{X: 3}`
- the indirection through struct pointers is transparent (you can
  write `p.X` even if `p` is a pointer to a struct)
type Vertex struct {
    X int
    Y int
}

// instantiation
v := Vertex{1, 2}
v.X = 4
### [Arrays](
- an array of `n` elements with type `T` is declared like this
`[n]T`, e.g. `[100]rune`
- arrays **can't** be resized
- Go has an array slice syntax similar to Python's list slices:
p := []int{2, 3, 5, 7, 11, 13}
- `make([]T, l, c)` creates a slice with **initial length** `l`
  and (optional) **capacity** `c`
- `len(s)` gives the *length* and `cap(s)` the *capacity* of slice `s`
- a `nil` slice has length
  and capacity `0`
- a slice can be appended with `append(s []T, vs ...T) []T`, where the
first argument is a slice of type `T` and the following parameters
are `T` values
- looping over a slice:
x := []int{2, 4, 8}
for i, v := range x {
    // i = index
    // v = value of x[i]
}
- you can skip a loop variable when you assign `_` to it, like in
Python: `for _, v := range x {}`
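The slice operations listed above can be combined into a small runnable sketch (values chosen arbitrarily):

```go
package main

import "fmt"

func main() {
	// A slice with initial length 2 and capacity 5.
	s := make([]int, 2, 5)
	fmt.Println(len(s), cap(s)) // 2 5

	// append returns a (possibly reallocated) slice.
	s = append(s, 42)
	fmt.Println(len(s), cap(s)) // 3 5

	// Skip the index with the blank identifier.
	for _, v := range s {
		fmt.Println(v)
	}
}
```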
### [Maps](
- map declaration looks like this: `map[T_key]T_value`, e.g.
  `map[string]uint64`
- maps have to be created with `make(map_declaration)` before use
- you can use **map literals** to initialize a map like this:

var m2 = map[string]uint64{
    "foo": 42,
    "bar": 314,
}

- there **must** be a trailing comma behind the last value!
- insert `m[key] = elem`
- get `elem = m[key]`
- `delete(m, key)`
- check if a key is present: `elem, ok = m[key]`, where `ok` is `true`
if `key` is present in map `m`, otherwise `ok` is false and the
`elem` is the zero value of its type
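Putting the map operations together in one runnable sketch (key and value names are arbitrary):

```go
package main

import "fmt"

func main() {
	// A map must be created with make (or a literal) before use.
	m := make(map[string]uint64)

	m["foo"] = 42    // insert
	elem := m["foo"] // get
	fmt.Println(elem)

	delete(m, "foo") // delete

	// The "comma ok" idiom checks whether a key is present.
	v, ok := m["foo"]
	fmt.Println(v, ok) // 0 false (zero value, key absent)
}
```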
### [Methods](
- there is **no class** construct in Go
- **but**, you can *define methods on* \[struct\] *types*, which is
  practically the same (see the paper *OOP with ANSI-C*) apart from
  the access modifiers
- the declaration looks like that from a function with an additional
**Method Receiver** between the `func` keyword and the *function name*
- you can call the method like you can access struct elements:
type Vertex struct {
    X, Y float64
}

// func (MethodReceiver) MethodName(Params) ReturnValue
func (v Vertex) Abs() float64 {
    return math.Sqrt(v.X*v.X + v.Y*v.Y)
}
- you can declare methods on **any type from your package**, but not on
  types from other packages (including built-in types)
- two main reasons for using pointer receivers:
- **call-by-reference**; by default the method gets a copy of the
  struct (call-by-value)
- **modifying** the method receiver **in-place**. You should know
  why you want to do this, because it's an explicit use of mutation
- you *can't* define the same method name for pointer and value type,
see the example below
type Decimal struct {
    X float64
}

func (v Decimal) Double() float64 {
    return 2 * v.X
}

func (v *Decimal) DoublePR() {
    v.X = 2 * v.X
}
v := Decimal{3.14}
// call-by-value
fmt.Println(v, v.Double())
// use the pointer receiver
v.DoublePR()
// DoublePR() has mutated v in-place
fmt.Println(v)

prints out:

{3.14} 6.28
{6.28}
### [Interfaces](
- an **interface type** is defined by a set of methods
- a **type** implements an interface by **implementing its methods**
- interfaces are **satisfied implicitly**. There is no explicit
**implements** keyword (like in Java), therefore an interface is
satisfied if the type implements its methods.
- the equivalent of Java's `toString()` method is the `String()`
method from the `Stringer` interface:
type Stringer interface {
    String() string
}
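A type satisfies `Stringer` implicitly by implementing `String()`; here is a minimal sketch using a `Vertex` type (the format string is my own choice):

```go
package main

import "fmt"

type Vertex struct {
	X, Y float64
}

// Vertex now satisfies fmt.Stringer, without any
// explicit "implements" declaration.
func (v Vertex) String() string {
	return fmt.Sprintf("(%g, %g)", v.X, v.Y)
}

func main() {
	v := Vertex{1, 2}
	fmt.Println(v) // the fmt package calls String(): (1, 2)
}
```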
### [Errors]( (Exceptions in Go)
- `errors` is a built-in interface (similar to `Stringer`)
- error checking is done by validating if an error value is `nil`
(Go's null type):
i, err := strconv.Atoi("42")
if err != nil {
    fmt.Printf("couldn't convert number: %v\n", err)
}
### [Web Servers](
- the [http package]( serves HTTP
requests using any value that implements `http.Handler`
- those values have to implement
`ServeHTTP(w http.ResponseWriter, r *http.Request)`
- [http Handler](src/exercise-http-handlers.go) example
func (s Struct) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "%s%s %s\n", s.Greeting, s.Punct, s.Who)
}
Concurrency mechanisms
### [Goroutines](
- a goroutine is a **lightweight thread**
- the name is a wordplay on *coroutine*
- goroutines run in the same address space, so they have access to
  shared memory → need for synchronization/locks
- a goroutine is started with `go f()`, where `f` is an arbitrary function
- f's arguments will be evaluated in the current goroutine
- f will be executed in the new goroutine
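A minimal goroutine sketch (synchronized through a channel so the program does not exit before the goroutine runs; the names are my own):

```go
package main

import "fmt"

func main() {
	done := make(chan string)

	// The argument "gopher" is evaluated here, in the current
	// goroutine; the function body runs in the new one.
	go func(name string) {
		done <- "hello " + name
	}("gopher")

	// Receiving blocks until the goroutine has sent.
	fmt.Println(<-done) // hello gopher
}
```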
### [Channels](
- a channel is a **typed pipe** (like pipes from the shell)
- a channel must be created before use:
`ch := make(chan type, bufferlen)`. The `bufferlen` parameter
is optional.
- you can send and receive values from a channel using the `<-` operator:
- send `ch <- v`
- receive `v := <-ch`
- send and receive on channels is **blocking** (until the other side
is ready) by default
- a buffered channel blocks only when the buffer is full
- channels **can** be **closed** to indicate that no more values will
  be sent
- **only senders** should close channels!
- you can check if the second return value of a receive is `false`,
then the channel was closed: `v, ok := <-ch`
This loops until the channel is closed:

c := make(chan type)
for v := range c {
    // ...
}
- the `select` statement is like `switch-case` for channels
- if multiple channels are ready at once, a random channel is chosen
- the `default` case is run if no other channel is ready (can be used
for non-blocking send/receive)
select {
case c <- x:
    x, y = y, x+y
case <-quit:
    return
}
### Miscellaneous
- the `defer` statement defers the execution of a function until the
surrounding function returns
- deferred function calls are pushed on a stack and are executed in
**LIFO** order
- [more on defer](
- to build & run a Go file in one step use `go run file.go`
- Go files can be formatted automatically using the `gofmt` tool. By
  default the formatted code is written to `stdout`; to overwrite the
  source file use `gofmt -w file.go`.
- the execution environment of a compiled program is deterministic,
thus a *random generator* for example has to be seeded, otherwise it
will deliver the same number on every run of the program
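The LIFO order of deferred calls mentioned above can be seen in a short sketch (my own example):

```go
package main

import "fmt"

func main() {
	// Deferred calls are pushed on a stack and run in
	// LIFO order when main returns.
	for i := 0; i < 3; i++ {
		defer fmt.Println("deferred", i)
	}
	fmt.Println("done")
}
```

This prints `done` first, then `deferred 2`, `deferred 1`, `deferred 0`.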
Further Reading
- [go-koans]( lets you learn Go by
  fixing test cases. Sounds boring, but it's actually quite fun to fix them.

---

**content/blog/**
"date": "2016-03-10",
"description": "How to setup goaccess to generate stats from Caddy logs.",
"slug": "basic-log-analysis-for-caddy-using-goaccess-and-systemd-timers",
"tags": "caddy, systemd, goaccess, logs",
"title": "Basic Log Analysis for Caddy Using goaccess and SystemD Timers"
This guide assumes that you're using journalctl to store [caddy]('s log output. If you're running caddy as a SystemD service and set the `log` and `errors` directives to `stdout/err` then this is the case.
At first we need a log dump to analyze, `journalctl` makes it very easy to get one:
journalctl --no-pager --since -7d --priority info --output cat --unit caddy > /tmp/caddy.log
- `--output=cat` omits the log metadata (timestamp, service name etc.)
- `--no-pager` prevents journalctl from opening `less` (or whatever pager you use)
- `--since=-7d` shows the log of the last 7 days. Omit this switch to get all log entries.
Last but not least we have to filter out the noise because [goaccess]( would otherwise refuse to parse our log output. The `--priority=info` switch will only show messages with log level `info` but don't worry, caddy logs 400 and 500 status codes to stdout.
[Installing goaccess]( is fairly straightforward because it's in the repos of all major Linux distributions. Arch Linux users can copy-paste `pacman -S goaccess`. caddy uses the [Common Log Format]( (CLF) by default, which is supported out of the box by goaccess. Now let's check if the toolchain works: `goaccess -f /tmp/caddy.log`. Remember to choose the CLF format in the next dialog. goaccess' [documentation]( is worth a look if you don't use the default log format or goaccess is complaining about your log dump.
It's time to write the goaccess config file, e.g. `~/.goaccessrc` to automatically set the log and date format. Here are the settings for caddy's CLF:
cat << EOL > ~/.goaccessrc
log-format %h %^[%d:%t %^] "%r" %s %b
date-format %d/%b/%Y
time-format %H:%M:%S
EOL
Web log analysis in the terminal is nice, but what we really want is an HTML report that we can view in our browser. To generate such a report run this `goaccess -p ~/.goaccessrc -f /tmp/caddy.log > /var/www/`.
Alright, now let's automate these steps by setting up a SystemD timer. You can also use a cronjob if you like.
cat << EOL > /etc/systemd/system/goaccess.timer
[Unit]
Description=Hourly generate web log report for caddy

[Timer]
OnCalendar=hourly

[Install]
WantedBy=timers.target
EOL
If you want to get more into detail on how to use systemd timers then take a look in `man systemd.timers` or in the [Arch Wiki](
Don't forget that there has to be a corresponding service file for each timer (if no `Unit` option was set), so let's create this, too:
cat << EOL > /etc/systemd/system/goaccess.service
[Unit]
Description=Web log report generation for caddy

[Service]
ExecStart=/bin/bash -c "journalctl --no-pager --output cat --priority info --unit caddy | goaccess -p ~/.goaccessrc > /var/www/"
EOL
Restart the systemd daemon and start and enable the timer:
systemctl daemon-reload
systemctl start goaccess.timer
# check if the timer was loaded
systemctl list-timers | grep goaccess
systemctl enable goaccess.timer
Here is, for the sake of convenience, an example caddy config for an http `basicauth` protected reports subdomain:
```ini {
    root /var/www/
    basicauth / admin password
}
```

---

**content/blog/**
"date": "2014-12-17",
"description": "A short summary about some basic algebraic structures.",
"slug": "groups-rings-and-fields",
"tags": "groups, rings, fields, abelian, monoid, math, algebra, mathematics",
"title": "Groups, Rings and Fields"
This semester I am taking an automata theory course at the university
which relies heavily on basic algebraic structures like
[rings]( and
[fields](; the
latter is better known as *Körper* in German lectures.
Sometimes I'm a little bit confused about which structure needs which
properties to be satisfied. That's why I decided to write a short
summary about this topic; the process of writing this post also helps
me to remember it, so I can kill two birds with one stone.
A group $(G, \cdot)$ is a set of elements $G$ together with a binary
operation $\cdot$ that satisfies the following conditions:
1. **Closure**
$\forall a,b \in G : a \cdot b \in G$
2. **Associativity**
$\forall a,b,c \in G : a \cdot (b \cdot c) = (a \cdot b) \cdot c$
3. **Identity**
$\forall a \exists 1 : ( a, 1 \in G ) \land ( a \cdot 1 = 1 \cdot a = a )$
4. **Inverse**
$\forall a \exists a^{-1} : ( a, a^{-1} \in G ) \land ( a \cdot a^{-1} = a^{-1} \cdot a = 1 )$
5. **Commutative**
$\forall a,b \in G : a \cdot b = b \cdot a$
- A **semigroup** only has to be *closed under the group operation*
$\cdot$ (1) and *associative* (2)
- A **monoid** is a semigroup plus the *identity element* (3)
- A monoid with the *inverse* (4) is called a **group**
- If the group is also *commutative* (5) then it's called an **abelian
  group**
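For a finite example the group axioms can be checked exhaustively. The sketch below (my own illustration, not part of the original post) verifies that the integers modulo 5 under addition form an abelian group:

```go
package main

import "fmt"

const n = 5 // Z_5, the integers modulo 5

// op is addition modulo n, the group operation.
func op(a, b int) int { return (a + b) % n }

func main() {
	assoc, comm, identity, inverse := true, true, true, true
	for a := 0; a < n; a++ {
		// 0 is the identity element, (n-a) mod n the inverse of a.
		identity = identity && op(a, 0) == a && op(0, a) == a
		inverse = inverse && op(a, (n-a)%n) == 0
		for b := 0; b < n; b++ {
			comm = comm && op(a, b) == op(b, a)
			for c := 0; c < n; c++ {
				assoc = assoc && op(a, op(b, c)) == op(op(a, b), c)
			}
		}
	}
	fmt.Println(assoc, comm, identity, inverse) // true true true true
}
```

Closure holds by construction, since `%` keeps every result in `0..n-1`.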
A ring $(R, +, \cdot)$ is a set of elements $R$ with two binary
operations $+$ and $\cdot$ that satisfy the following conditions:
1. $+$ is **associative**
$\forall a,b,c \in R : a + (b + c) = (a + b) + c$
2. $+$ is **commutative**
$\forall a,b \in R : a + b = b + a$
3. **Identity** for $+$
$\forall a \exists 1 : ( a, 1 \in R ) \land ( a + 1 = 1 + a = a )$
4. **Inverse** for $+$
$\forall a \exists a^{-1} : ( a, a^{-1} \in R ) \land ( a + a^{-1} = a^{-1} + a = 1 )$
5. Left and right **distributive**
   $$\begin{aligned}
   \forall a,b,c \in R :\; & a \cdot (b + c) = (a \cdot b) + (a \cdot c) \land \\
   & (b + c) \cdot a = (b \cdot a) + (c \cdot a)
   \end{aligned}$$
6. $\cdot$ is **associative**
$\forall a,b,c \in R : (a \cdot b) \cdot c = a \cdot (b \cdot c)$
- If condition 4. is *not* satisfied (there is no additive inverse
element), then the structure is called a **semiring**.
- A **ring** has to satisfy all six conditions
- If the $\cdot$ operation is also commutative the structure is called
  a **commutative ring**
A field $(F, +, \cdot)$ is a set with elements $F$ and two binary
operations $+$ and $\cdot$ that satisfies *all the conditions of a ring*
plus the following three:
1. $\cdot$ is **commutative**
$\forall a,b \in F : a \cdot b = b \cdot a$
2. **Identity** for $\cdot$
   $\forall a \exists 1 : ( a, 1 \in F) \land ( a \cdot 1 = 1 \cdot a = a)$
3. **Inverse** for $\cdot$
   $\forall a \exists a^{-1} : ( a, a^{-1} \in F ) \land ( a \cdot a^{-1} = a^{-1} \cdot a = 1 )$
- If the $\cdot$ operation is *not commutative*, then the structure is
called a **skew field**, **division algebra** or **Schiefkörper**
- [Group](
- [Ring](
- [Field axioms](

---

**content/blog/**
"date": "2014-12-18",
"description": "How to enable IPv6 on your Ubuntu 14.04 droplet and how to set it up in nginx.",
"slug": "how-to-enable-ipv6-on-your-ubuntu-droplet",
"tags": "IPv6, droplet, digitalocean, ubuntu, nginx",
"title": "How to enable IPv6 on your Ubuntu droplet"
In [one of my previous posts](/posts/klingtnet-goes-ssl-and-spdy/) I
showed how to enable IPv6 in nginx, but I haven't tested it. I thought
it's enough to enable it in my droplet settings, setup the `AAAA` DNS
record and add the IPv6 listen directive to the nginx configuration.
Today I've checked through
[]( if
[]( is reachable via IPv6 and to my
surprise it wasn't. The problem was the missing `inet6` address entry in
the network interface configuration `/etc/network/interfaces`.
Setup your network interface
At first go to your droplet settings page and write down your public
IPv6 *address* and *gateway*.
SSH into your droplet and before making the settings permanent, try them
out using the `ip` command, like this:

ip -6 addr add PUBLIC_IPv6_ADDRESS/64 dev INTERFACE
ip -6 route add default via PUBLIC_IPv6_GATEWAY dev INTERFACE
- `INTERFACE` will be eth0 in most cases, you can view your interfaces
with `ip -6 addr show`
- everything else is viewed in your droplet settings
Now you should be able to ping your machine `ping6 -c 2 PUBLIC_IPv6`. If
the output looks similar to this, you're almost done:
$ ping6 -c 2 2a03:f00:2:ba9::17d:c001
PING 2a03:f00:2:ba9::17d:c001(2a03:f00:2:ba9::17d:c001) 56 data bytes
64 bytes from 2a03:f00:2:ba9::17d:c001: icmp_seq=1 ttl=56 time=33.8 ms
64 bytes from 2a03:f00:2:ba9::17d:c001: icmp_seq=2 ttl=56 time=37.1 ms
--- 2a03:f00:2:ba9::17d:c001 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 33.808/35.464/37.120/1.656 ms
Make the settings permanent by adding them to your
`/etc/network/interfaces` config file:
iface eth0 inet6 static
address 2a03:f00:2:ba9::17d:c001:c001
netmask 64
gateway 2a03:f00:2:ba9::1
autoconf 0
dns-nameservers 2001:4860:4860::8888 2001:4860:4860::8844
The `dns-nameservers` entries are the public Google DNS server addresses.
If you haven't done it yet, add an `AAAA` record to your DNS settings
and let it point to the *public IPv6 address* of your droplet. This
could take some time until the record is propagated, but in my case it
worked almost instantly.
Now you have to add the listen directives to your nginx website config,
`listen [::]:80;` for plain ol' HTTP and `listen [::]:443 ssl spdy;` if
you are [a cool kid](/posts/klingtnet-goes-ssl-and-spdy/).
Please remember to restart your nginx to make the changes take effect:
`service nginx restart`.
If you have a firewall installed make sure that *ports 80 and 443 are
open*. If you have enabled IPv6 in your `/etc/default/ufw` this was
already done.
Have fun with IPv6!

---

**content/blog/**
"lastmod": "2016-04-10",
"date": "2016-03-05",
"description": "The setup of my netcup vServer that runs Arch Linux, Docker and Caddy.",
"slug": "my-netcup-server-with-arch-linux-docker-and-caddy",
"tags": "netcup, caddy, arch, linux, server, docker, systemd",
"title": "My netcup server setup with Arch Linux, Docker and Caddy"
- login into the [vservercontrolpanel (vcp)](
- select your vServer
- open the `Image` tab
- select `Arch Linux` image and follow the instructions
- wait for the confirmation email
- ssh into the server with the given password `root@IP`
- update pacman keys:
- initialize the pacman keyring: `pacman-key --init`
- initialize dirmngr: `dirmngr < /dev/null`
- refresh the keys: `pacman-key --refresh-keys`
- update the system: `pacman -Syu`
- install vim: `pacman -S vim ncurses` (or install emacs, or whatever you're up to)
- **IMPORTANT** re-allow rootlogin after the system update (we disable it later):
- add `PermitRootLogin yes` to your `/etc/ssh/sshd_config`
- restart the sshd service: `systemctl restart sshd`
- [set the locale]( to ~~english~~ the desired language:
- uncomment your language in `/etc/locale.gen`
- run `locale-gen`
- set the value of `LANG` in `/etc/locale.conf`
- relogin or reboot
- **NOTE** don't logout from your root session until you've setup your user and checked if `sudo`ing works
- add a user with sudo capabilities `useradd -m -g users -G wheel -s /bin/bash NAME`
- **IMPORTANT** allow sudo for wheel-group users:
- run `visudo`
- uncomment: `%wheel ALL=(ALL) ALL`
- quit `:wq`
- set a password for your user: `passwd NAME`
- generate an SSH keypair if you don't have one: `ssh-keygen -b 4096`
- enter a passphrase
- *NOTE*: you can omit the `~/.ssh/keyname` if you want to use the default `~/.ssh/id_rsa`
- you can change the key's comment if you want: `vim ~/.ssh/` and replace the `user@host` part
- copy the key to your server: `ssh-copy-id -i ~/.ssh/keyname NAME@SERVER`
- try to login: `ssh NAME@SERVER` and check if sudo works: `sudo echo foo`
- if everything works fine disable the root login for SSH:
- change `PermitRootLogin yes` to `no`
- disable password based login:
PasswordAuthentication no
ChallengeResponseAuthentication no
# UsePAM no
# You don't have to disable PAM but it can't do much useful w/o
# password based login.
# Details:
- I personally also change the SSH daemon port to something different than 22
- update your `~/.ssh/config` with a section for your server:

Host SHORTNAME
    HostName SERVER
    User NAME
    Port 22 # or the port you've set
    IdentityFile ~/.ssh/keyname
    Compression yes
- check out if the `Host` setting works: `ssh SHORTNAME`
- restart sshd `systemctl restart sshd`
- **NOTE** now you can logout from your session and use your user account
- set your hostname: `hostnamectl set-hostname myhostname`