Chris’ wirre Gedankenwelt

How I Got Into Go

It’s a relatively short story, and not even a particularly entertaining one.

Around 2015 (maybe a bit earlier), Go came more and more onto my radar. At the time, I was working primarily with Python, and Python devs were right in the middle of the python2 to python3 migration (debacle?). UnicodeDecodeError, anyone? So a language with a v1 compatibility promise sounded very appealing. Btw, the promise stood the test of time. On top of that, I was building platforms to run applications, and cloud something something. Seeing projects like Docker and Kubernetes, both written in Go, evolve made it even more interesting to me.

I didn’t get the chance to dive into Go at work, so I played around with it at home. There are still some traces on my computer of reading data from an SHT7x temperature and humidity sensor. And what should I say… coming from Python, I didn’t particularly like Go. Despite that, I kept Go on my radar. At the time (and still today), I was very much into listening to podcasts. Maybe because of this combination, the release of the very first episode of Gotime didn’t pass me by. I fell in love with the OG crew, and have listened to every episode since. Besides being fun and entertaining, it kept me up to date with the ecosystem. All of that helped when I started to write more and more Go and got involved in the early phases of Cluster API for OpenStack and Cloud Provider OpenStack.

These days, I don’t write much code. But when I do, I tend to reach for Go more often than not. That might be a result of needing to write glue related to Kubernetes, where Go still is the lingua franca. But even outside that ecosystem, I use Go for CLI and TUI apps, built with the fantastic bubbletea. Distributing just a single binary is so easy! And when writing code to talk to OpenStack APIs, I somehow prefer gophercloud over Python and openstacksdk.
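
A minimal sketch of what that looks like (not taken from any of my projects; the counter and key bindings are made up, purely to show the bubbletea Model/Update/View shape):

package main

// Minimal bubbletea sketch: a counter model that increments on "+" and quits on "q".
// Illustrative only.

import (
	"fmt"
	"os"

	tea "github.com/charmbracelet/bubbletea"
)

type model struct {
	count int
}

func (m model) Init() tea.Cmd { return nil }

func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	if key, ok := msg.(tea.KeyMsg); ok {
		switch key.String() {
		case "q", "ctrl+c":
			return m, tea.Quit
		case "+":
			m.count++
		}
	}
	return m, nil
}

func (m model) View() string {
	return fmt.Sprintf("count: %d  (+ to increment, q to quit)\n", m.count)
}

func main() {
	if _, err := tea.NewProgram(model{}).Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

A quick go build then gives you exactly one binary to copy around, which is a big part of the appeal.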

Today, Go and Python together are pretty much my first choice for almost all programming I do. Having done Python for about 20 years (😲 😳 😅), I still like the concepts and how fast you can get going with it, especially in a prototyping phase. And nothing beats the REPL 😄

One more thing about Gotime. It didn’t just hook me on Go, it also brought me into the Changelog universe, which I was not aware of before. To this day, I have listened to many of the shows, and I am a ++ member.

State of Bike Workshops in 2024

Frustrating!

It was that time again. My mother-in-law took her bike to the workshop because “it rattles in the middle gears”. She got the bike back and was told everything was fine.

Now, I don’t know whether my expectations are too high. But I do think a mechanic should be expected to notice when only 7 of 9 gears are reachable, with neither the smallest nor the largest sprocket usable, and that the 9 clicks of the shifter are therefore spread across only 7 gears. Of course, I don’t know whether that went unnoticed or was simply ignored 🤷

Anyway. The fix was as simple as it gets: adjust the limit screws and the cable tension. For me… as a non-mechanic without a work stand, that was a 20-minute job. With a stand it would probably have been only 10.

And this is not the first time I have gotten a bike back from the workshop and had to fix it up myself first. Last time it was completely misadjusted disc brakes. They didn’t just make noise, the brake pads were rubbing against the rotor 🙄

Vacation in the Ratschings Region

We are spending our family summer vacation near Sterzing. Because we don’t have a car, we always try to pick regions that are easy to reach by train and bus. This year is no different. From Berlin to Munich, then change trains towards Italy. From Brenner to Sterzing it is three stops on the regional train, then a 20-minute bus ride into the Ridnauntal. We are almost at the very end of the “dead-end valley”. From here we can start hiking right from the front door.

The best part: despite being at the end of the valley, there is an hourly bus to Sterzing. The bus network here is excellent, so you can hike over into another valley and take the bus back. And with the ActiveCard it is completely free! Today, for example, we took the 8:45 bus to Sterzing, rode the gondola up the Rosskopf, hiked for 1.5 hours up there, did a run on the summer toboggan track, and were back by 12:40 for the Olympic handball final 🤾‍♂️. Whether that was worth it 🤷

Tips:

  • Guided tour of the mining museum!
  • Swimming pool (combined indoor and outdoor pool)
  • Climbing course
  • Twilight tour of Burg Reifenstein. We learned that there is still an order of knights that meets there once a year 🧐

Also possible, but we didn’t get to it:

  • Renting e-mountain bikes (also for kids).
  • High ropes course in Sterzing

How Did I Get Into IT

Well. I would say, primarily because of my dad. I remember the days at home with a 14.4k modem, connecting to BTX. There was a time in the nineties when it felt like everyone should be on the Internet. Crazy, isn’t it?

From about ’98 through 2000, I had a very curious IT teacher. The lessons were not mandatory, but we had a computer room at school with, let’s say, about twenty 486 PCs. We learned some BASIC and Pascal, and later HTML and JavaScript. Which isn’t a matter of course for a school even today!

At home I also got an old, affordable 386 and a boxed copy of SuSE Linux 5.2. I don’t remember doing too much crazy stuff with it, besides what we learned at school.

At the end of 1999, my dad, again, encouraged me to ask for an internship at a very small local ISP. I got it, and afterwards was paid “to do computer stuff” for them once a week. In the summer of the following year, I started a three-year apprenticeship (a very German thing) there. After that, I was allowed to call myself a Fachinformatiker Anwendungsentwicklung.

Back in those days, data centers were just some room in some building. In our case, our office was in the old kitchen of a former restaurant. Guess what, the server room was in the former cooling room. We had air conditioning with a bucket under a pipe. On hot summer weekends, one of us needed to go to the office on Sunday to empty the bucket. However, I learned so much in those three years. Not much in the associated school, where I was actually pretty average, but at work. We were four guys doing pretty much everything: running an ISP with ISDN/modem dial-in, running web, mail, DNS, and what not. Programming web applications with CGI/Perl and PHP3. We automated the maintenance of our machines with Perl scripts. I ran tech support and, later, once I had a driver’s license, fixed customer servers on site. Did I mention that we ran all of the servers and the dial-in connections over a 2 Mbit/s line? Eventually, in about 2003, we managed to move everything out of the cooling room and into a real data center. There weren’t many abstractions, no configuration management or what not. It was a time of figuring stuff out and automating with a script where possible. 🤔 Not too different from today!

I’m still very thankful that I was there at that time. I learned tons, without much formal education. Later, I went to college and got my formal degree. But I’m quite sure most of my capabilities stem from those (initial) three years.

A short one on Rust

From time to time, I get curious about some programming language. Once, it happened to be Rust, and I read the Rust book during the first lockdown. I think it is a really good resource. Well done! Over the past years, I also read Rust in Action.

It so happens that I haven’t written a single line of Rust so far, though I have read some code. A couple of weeks ago, rustlings popped up on my radar. I installed it and walked through it. Equipped only with my theoretical knowledge, I have made it up to section 20 so far without needing many hints.

I don’t really know what I want to say, other than: play around, try out new stuff, keep learning 😄

Another attempt to revive this blog

For me, it is notoriously difficult to keep up with sharing stuff. On the one hand, I think I don’t have much to share. But then again, neither do most people on the internet.

In the past years, my blog software was just outdated, and I didn’t take the time to do anything about it. As a result, I had some topics to write about, but because everything on my side was broken, I didn’t write. At some point, I decided to try switching to Hugo. Mostly for procrastination purposes 😅 I wrote a couple of lines of Go to convert my content over from Nikola reStructuredText to Hugo Markdown, which turned out to be a nice little programming challenge. After finding all the edge cases in my very personal use of Nikola, things went relatively well. To get most of the functionality back, I had to add some special templates, partials, and shortcodes. Nothing special really, but work to be done.
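
Just to give an idea of the shape of such a pass, here is a minimal sketch (not my actual converter; the posts/ directory layout and the handling of Nikola’s “.. key: value” metadata comments are simplified assumptions, and the real reST-to-Markdown rewriting of the body, where all the edge cases live, is left out):

package main

// Illustrative sketch: lift Nikola-style ".. key: value" metadata comments into
// Hugo YAML front matter and copy the body through unchanged. Not the real converter.

import (
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func convert(src string) (string, error) {
	raw, err := os.ReadFile(src)
	if err != nil {
		return "", err
	}
	var meta, body []string
	inHeader := true
	for _, line := range strings.Split(string(raw), "\n") {
		if inHeader && strings.HasPrefix(line, ".. ") && strings.Contains(line, ":") {
			// ".. title: Hello" -> `title: "Hello"`
			kv := strings.SplitN(strings.TrimPrefix(line, ".. "), ":", 2)
			meta = append(meta, fmt.Sprintf("%s: %q", strings.TrimSpace(kv[0]), strings.TrimSpace(kv[1])))
			continue
		}
		inHeader = false
		body = append(body, line)
	}
	return "---\n" + strings.Join(meta, "\n") + "\n---\n" + strings.Join(body, "\n"), nil
}

func main() {
	// Walk an assumed posts/ directory and write a .md file next to each .rst source.
	err := filepath.WalkDir("posts", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || filepath.Ext(path) != ".rst" {
			return err
		}
		out, cerr := convert(path)
		if cerr != nil {
			return cerr
		}
		return os.WriteFile(strings.TrimSuffix(path, ".rst")+".md", []byte(out), 0o644)
	})
	if err != nil {
		log.Fatal(err)
	}
}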

Because my day job is also with computers, I tend not to do too much of that in my spare time. I really prefer spending time with my 👨‍👩‍👦 or riding my 🚴‍♂️. As a result, at some point I was stuck halfway through the migration. Even more reason for me not to blog 🤷

Today is the day. It is supposed to rain the entire weekend. I spent the last two days on the bike: Thursday a 70 km road ride 🚴‍♂️, yesterday 50 km on gravel 🚵‍♂️. The kids are busy meeting friends. No real excuse not to spend an hour or two smoothing off the last obvious rough edges.

Let’s see what happens next. Maybe I manage to keep up with writing, maybe this is the last post for the next six years 🙈 🙊 🙉

About Load Balancing at GitHub

I just want to point to an interesting resource on load balancing. GitHub recently released GLB: GitHub's open source load balancer (the GitHub Load Balancer Director and supporting tooling repo), which is the actual implementation of what was described back in September 2016 in Introducing the GitHub Load Balancer. It guides the reader through the challenges GitHub faces in the space of load balancing, how they came to their solution, and what didn't work for them.

The load balancer at GitHub needs to be incredibly reliable. We all use it when cloning or pushing to a repo, both of which can take a while. Cutting connections for maintenance purposes would affect many people, especially those working with bad internet access.

Even if you are not GitHub, this might be interesting. For instance, if you run OpenStack and depend on the Image API (Glance) for VM snapshots, a restart of the load balancer results in a corrupt snapshot, which might go unnoticed until there is an urgent need to use it. Sure, one should verify that the snapshot works, but not everyone does. And even if one does, it is quite annoying to wait, let's say, an hour for the upload, just to discover the time has been wasted.

Use HyperFIDO HYPERSECU on Linux

To use a HyperFIDO HYPERSECU token on Linux, you must have access to the device it creates, so you need to add a udev rule to set up the permissions. There are two examples on the support page, but for the systemd uaccess method, one piece of information is missing.

You can use the HyperFIDO-k5-1.rules approach:

ACTION!="add|change", GOTO="u2f_end"

KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="096e", ATTRS{idProduct}=="0880", TAG+="uaccess"

LABEL="u2f_end" 

But you need to rename it so that it sorts before 70- in /etc/udev/rules.d/, because the uaccess tag is processed in 70-uaccess.

E.g. /etc/udev/rules.d/69-hyperfido.rules.

Starting from scratch

This post is part of a series about how we at SysEleven manage a large number of nodes. We deploy OpenStack on those nodes, but it could be basically anything.

For sure, this is not our first attempt to deploy a manageable OpenStack platform. We had already deployed such a platform, and we also deployed a platform based on Virtuozzo, which is still in heavy use for our managed customers. We took a whole bunch of learnings from deploying and managing those platforms, which led us to the decision to start from scratch.

At the beginning of this year, there was basically nothing but a rough plan. Not a single line of code, no software, no automation of any kind. With this project, we were able to break with everything we run in the company. We were (and still are) allowed to question every software and hardware decision. We started on a greenfield.

There may be many things you would question. Most of what we did is inspired by what others did. Nothing is new, or any kind of rocket science. Sometimes we needed to take the shortest path, because we had a deadline of mid-year to deploy a stable, highly available OpenStack platform.

So, where to start over?

First of all, we needed to drill down into the issues we encountered with the previous deployments. We went from vague ideas to concrete problems with the software and hardware solutions we used. But the largest problem was the lack of a formal definition of what we wanted to build, and how we wanted to do it.

So the very first step was to change this. We wrote a whole bunch of blueprints, capturing a high-level view of the cluster. Some of them were very precise, like the decision for Ansible as our primary configuration management system, although the former cluster was built with Puppet. Or the decision on network equipment, how we plug it together, and how we operate it. Some other very specific blueprints described that we use a single mono-repo, how we manage the review process, that everything we script has to be done in Python or Go, style guides for those languages and for Ansible, how we handle dependencies in Ansible, that we are going to solve our current problems and not every conceivable future problem, and so on and so forth. There were also some vaguer blueprints: we need a way to get an overview of the actual cluster state, not just what we expect it to be; we need a system to initially configure and track our nodes.

All blueprints are meant to change. Not wholesale, but each time someone digs into the next topic, the blueprint is extended with the current knowledge and needs.

So we had a rough overview of what we needed. We split the team into two groups. As we knew we had to deploy a working OpenStack, one group started deploying OpenStack, with all the components needed, via Ansible. The primary goal still is to provide a highly available OpenStack. The group I belong to works on the custom software we need to provide a basic configuration for our nodes, keep track of them, and keep them up to date.

I joined the team after 8 months off to take care of my boy. At that point, the basic blueprints were written and the team was consolidated. And to be honest, I am not sad to have missed this process, and especially not the path leading up to it.

Where did we go from there?

We dedicated three nodes to be our bootstrap nodes, the so-called bootinfra. They are mostly installed and configured by hand.

Within half a year, we got to the point where we can plug a new node into a rack and it shows up in our MachineDB, where we configure the location of the node. After this, a very basic configuration happens automagically: the node configures its designated fixed IP addresses, hostname and FQDN, some BGP configuration, repositories, and SSH configuration. Just very basic stuff to reach the node. The next step is still very manual: we assign the node to a group in our Ansible inventory file and run Ansible by hand.

From the Ansible point of view, nothing has changed since then, even though there was a lot of progress in the details. But we were able to deploy two other clusters in the same manner. One of them is our lab, which shares the production bootinfra. That tends to be easier, but it threw up a whole bunch of problems. For the other cluster, we needed to automate much of our (so far handcrafted) bootinfra. Which is now paying off, since we are about to replace the initial bootinfra nodes with automated ones.

Next time, I will write about our bootinfra. Not too much about the configuration, but which services we run, and for what reasons.

Managing a large number of nodes

Imagine the following setup. As surely mentioned somewhere in this blog, we are running a cloud based on OpenStack. The team I am working in is responsible for provisioning new nodes into the cluster and managing the lifecycle of the existing ones. We also deploy OpenStack and related services to our fleet. One huge task is running OpenStack highly available. This seems not too difficult, but it also means we have to make every component OpenStack depends on HA as well. So we use Galera as the database, Quobyte as distributed storage, clustered RabbitMQ, clustered Cassandra, ZooKeeper, and things I may have forgotten.

I will write about some aspects of our setup, how we deploy our nodes, and how we keep them up to date.
