Editor's note: This post was originally published on the Red Hat Developer blog.
Have you ever thought about having your own cloud environment? A local cloud is one of the best things you can do to better understand all the gears that turn inside a highly productive environment. How do I know that? I’ve done it! And I’m ready to show you how I did it, and how you can do it too.
Beep, beep! The alarm just sounded. It’s 4 a.m. and I can’t even feel my thoughts. I’ve got to leave without making noise. Luckily the airport is not so far from home.
I’m often traveling around the country showing a lot of stuff about DevOps, focusing on the Red Hat OpenShift Container Platform. It’s great work, but it has its risks. Since I never know exactly what kind of environment I’ll be presenting in, I’m constantly surrounded by challenges: networks full of policies and proxies, buildings without reliable mobile Internet access, exceeded mobile data quotas, poor hotel Internet access; the list keeps growing. Having a lot of resources on the cloud doesn’t solve my problem if I can’t connect to the cloud. It’s a recipe for disaster.
I couldn’t depend on cloud providers. I had to be my own cloud provider.
I have a great friend who happens to be a digital nomad like me. He is one of the best companions ever. One day we were browsing the Internet looking for a “Raspberry Pi on steroids”. After finding a lot of tiny and powerful devices, he spotted our winner: the Intel Nuc Skull Canyon.
I told him: “One day I’ll have one of these. Imagine how great it will be to arrive at a customer’s site showing off this cute piece of hardware.”
This wasn’t the first time my friend Claudio had found a great piece of hardware. I’d been using a GL.iNet router he recommended (the GL-AR300M model). But the Nuc was Claudio’s biggest discovery. It instantly became the highest-priority item on my buying list. After months of saving, I was finally able to buy a Nuc with an astonishing 32GB of RAM, plus 1TB of SSD. The next step was supposed to be simple: running Red Hat OpenShift on it.
In the beginning, I was using the classic oc cluster up. It spawns an unstoppable beast that runs really smoothly on such hardware. It was fast, but not fun, especially because metrics and logging didn’t work: some issues in the deployer pods prevented them from succeeding. I ended up writing a Red Hat Ansible Automation playbook to fix those issues using the oc debug command. It was functional, but definitely not fun.
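For context, this is roughly what that phase looked like. The oc cluster up flags below are the ones the 3.7/3.9 client accepted (later releases changed them), and the namespace and pod name are just placeholders for whichever deployer pod was failing:

```bash
# Spin up the local all-in-one cluster; --metrics/--logging were boolean flags
# in the 3.7/3.9 oc client (treat them as illustrative for other versions).
oc cluster up --metrics=true --logging=true

# Inspect a failing deployer pod in a throwaway debug container to see why it never succeeds.
# List the pods first and pick the broken one.
oc get pods -n openshift-infra
oc debug pod/<failing-deployer-pod> -n openshift-infra
```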
Red Hat OpenStack Platform sounded like a lot of fun to me, however installing it through Red Hat OpenStack Director on the Nuc wasn’t a feasible task. So, I went with the easy-peasy Packstack (please, don’t kill me).
Well… not so easy for a developer like me, who had near zero experience with network stuff.
After a lot of trial and error, I finally managed to configure Red Hat OpenStack Platform. Since “stuff” happens, I wrote a playbook to bring it up with a lab project containing all the stuff I needed to play with it. Then I made the whole thing available on GitHub. “The fun has begun.”
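If you want to follow the same Packstack route, the flow is roughly the one below. The answer-file name is arbitrary, and which options you end up editing (NICs, passwords, enabled services) depends entirely on your hardware:

```bash
# Install Packstack from the RDO/OSP repositories.
sudo yum install -y openstack-packstack

# Generate a default answer file, then edit the interfaces, passwords, and services you care about.
packstack --gen-answer-file=answers.txt

# Run the actual all-in-one deployment against the edited answer file.
packstack --answer-file=answers.txt
```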
Ok! I had a way to install Red Hat OpenStack Platform, but how about installing Red Hat Enterprise Linux? Some web pages later I found Anaconda’s Kickstart. It’s a way of automating the RHEL installation (and that of any other Linux distribution installed through Anaconda). Even better, RHEL writes a Kickstart file after every installation. Then you just have to copy the file to a drive labeled OEMDRV. Two flash drives (one with the RHEL image and the other with the Kickstart file) would trigger the automated install. But I didn’t want to use flash drives; I had two unused Android devices. Even more fun.
I’ve been using Android devices since 2010; my first one was a Motorola Quench running Android 1.5 (Cupcake). When I rooted it and saw the endless possibilities, my mind opened and I became fascinated by using Android devices for everything.
I started to search for a way to use an Android device as a flash drive, which led me to the awesome DriveDroid; it’s an app that emulates both flash drives and CD-ROM drives. I took my phones and loaded one with the RHEL image, the other with the Kickstart image, then I plugged both into the rear USB ports. I didn’t care about the battery because they were old phones.
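Since DriveDroid serves disk images rather than physical sticks, the Kickstart half boils down to packing the generated anaconda-ks.cfg into a small image whose volume label is OEMDRV (Anaconda automatically picks up a ks.cfg from a volume with that label). A minimal sketch, with the image size and file names chosen arbitrarily:

```bash
# Build a tiny FAT image labeled OEMDRV that holds the Kickstart file as ks.cfg.
dd if=/dev/zero of=oemdrv.img bs=1M count=16
mkfs.vfat -n OEMDRV oemdrv.img

# anaconda-ks.cfg is what RHEL leaves in /root after an installation.
sudo mount -o loop oemdrv.img /mnt
sudo cp anaconda-ks.cfg /mnt/ks.cfg
sudo umount /mnt

# Copy oemdrv.img to the phone and expose it through DriveDroid.
```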
So I thought: “Now I have two phones; just for installing RHEL? Why don’t I use them for something else?” Two Android devices can make a difference in the setup. I installed the fantastic Servers Ultimate on both phones to reduce the workload on the Nuc. SMB sharing on both phones allowed me to upload any new image I wanted to install, which led me to add an HTTP server to serve installation files for my Docker images. A git server would do the rest of the trick by holding the inventory files for my open-sourced Red Hat Ansible Automation playbooks.
To finish up, I plugged the router into the USB-C port. The router takes more time to boot up than the Nuc does, and that causes a network issue: Red Hat OpenStack Platform needs NetworkManager disabled, so the network has to be available before the Nuc boots. By attaching the router to the USB-C port, it can be powered without the Nuc itself being on. I then attached all the cables and left the Nuc ready to go just by plugging in the power supply. Hook-and-loop fasteners completed the design, holding the Android devices and the router on top of the Nuc. The whole package fits in a little handbag, which I have to open every time I go to the airport because its x-ray image looks a lot like a bomb! I discovered that in the worst possible way.
Installing RHEL with the Red Hat OpenStack Platform (plus a fully working project) was only a matter of seven steps:
- Run DriveDroid on both phones.
- Turn on the Nuc.
- Wait until a push notification arrives on my main phone.
- Shut off DriveDroid.
- Turn on the Nuc again.
- Run the Red Hat Ansible Automation playbook to install Red Hat OpenStack Platform.
- Wait until the second push notification arrives.
My Kickstart script turns off the Nuc after writing my ssh public key into the authorized keys and sending a notification through Pushover. I’ve been using Pushover for some time; it’s a straightforward way to get notified. That second push notification means a lot to me; it tells me my cloud environment is ready.
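The %post section doing that is nothing fancy. Here is a sketch of the idea; the key, token, and user strings are placeholders, and the Pushover call is just its standard messages API:

```bash
%post --log=/root/ks-post.log
# Drop my public key into root's authorized keys (the key content is a placeholder).
mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "ssh-rsa AAAA... me@laptop" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys

# Tell Pushover the installation is done (token and user key are placeholders).
curl -s \
  --form-string "token=<app-token>" \
  --form-string "user=<user-key>" \
  --form-string "message=RHEL install finished, shutting down the Nuc" \
  https://api.pushover.net/1/messages.json
%end

# The shutdown itself comes from the poweroff directive in the Kickstart command section.
```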
I finally managed to be my own cloud provider. With a lot of fun, and not a single drop of rum!
The next step, installing Red Hat OpenShift, wasn’t easy. After a lot of issues while running the playbook, I found the problem. The router (the GL-AR300M) is a great router, but it’s not built to handle the traffic of a PaaS. So I decided to create an internal DNS server as an OpenStack instance.
From my laptop, I was using the external IP addresses. Internally, however, the instances talk to each other using only their internal IP addresses instead of the external ones. A classic mistake for a developer like me.
With everything settled, I ran the playbook again… and got another error. Red Hat OpenShift wasn’t able to talk to Red Hat OpenStack Platform in order to create volumes in Cinder and attach them to the nodes running pods. The problem had already been solved upstream: a single line telling Red Hat OpenShift to use version v2 of the Red Hat OpenStack Block Storage API. So I wrote a little workaround to apply the fix to OCP 3.7, wrapped everything up in a playbook, and pushed it to GitHub. With OCP 3.9 carrying the fix from upstream, I don’t need my custom fix anymore, just the regular playbooks.
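For the curious, the workaround boils down to pointing the OpenStack cloud provider at the Cinder v2 API on each host and restarting the node service. This is only the shape of it, assuming the default OCP 3.7 cloud config path and service name:

```bash
# Append the Block Storage API version to the OpenStack cloud provider config
# (default OCP 3.7 location; adjust if your cloud config lives elsewhere).
cat >> /etc/origin/cloudprovider/openstack.conf <<'EOF'

[BlockStorage]
bs-version=v2
EOF

# Restart the node service so the kubelet picks up the new cloud config.
systemctl restart atomic-openshift-node
```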
The playbook creates all the instances with Docker storage mapped to a Cinder volume, gets all the prerequisites done, and generates the Red Hat Ansible inventory file. Neat! With a single step, I was able to bring up a Red Hat OpenShift cluster. I ran it many times over a weekend just to watch things happen. That was “gigafun”!
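If you want to reproduce that last step without my playbook, the equivalent openshift-ansible invocation (assuming the 3.9 release branch layout and whatever path your generated inventory lives at) is roughly:

```bash
# Run the prerequisites first, then the actual cluster deployment,
# both against the inventory file produced by the provisioning playbook.
ansible-playbook -i inventory/hosts openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i inventory/hosts openshift-ansible/playbooks/deploy_cluster.yml
```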
With the cloud environment done, it was just a matter of installing the tools for my presentations. But the environment was so great that I decided to move my whole working environment onto it. My presentations became real-world scenarios!
I don’t like to put labels on developers: backend, frontend, full stack… it sounds like naming subgenres of heavy metal. It’s all about coding, though one can have more expertise in some areas.
I love coding and I try to learn many programming languages. They’re tools. If you have the right tool for the job, you can get the job done with pleasure (and fun). That’s why I also love to code tools to get better at getting my job done. So, my work environment is quite easy to reproduce: a GitLab instance and a Nexus Repository. But that doesn’t mean I don’t have a value stream to deliver my tools.
I was abducted by the GitLab Runner. It’s fantastic! My Gogs instance went down and I never looked back. Don’t get me wrong, Gogs is a wonderful project, but the GitLab Runner provided me the best tool to get my job done (aka fun).
The runner is a connection between your code and your value stream (the pipeline). Every step on the pipeline runs inside a container created by the runner on top of Red Hat OpenShift. I created a set of build images to not only compile my code but also to release it. Pushover tells me everything about my pipeline. Everything now happens in a wonderful and powerful integration that helps me to engage people.
There are tons of ways to do something, but the way you show how it’s done is what engages people. It’s how magic is done!
A good card trick is straightforward. It doesn’t matter how you ask people to pick a card, or how nicely you shuffle the deck. In the end, it’s all about how you reveal the card. If you do it right, it will be unforgettable.
I love card tricks! You can easily engage an audience with a good trick, and that’s how I do my presentations nowadays. Nobody expects me to show up with a little device and spin up an entire environment ready to rock. It’s my best trick! The fun-o-meter went off the scale!
Oh! My cab is almost here; I should probably finish my coffee. I have a presentation to do… and the best environment is now by my side.