Deploying a virtual TripleO standalone OpenStack system
TripleO stands for OpenStack on OpenStack and is one of the official OpenStack Deployment projects. Installing a TripleO standalone system can be a great way to create a proof of concept, home lab, or an environment to learn on for OpenStack. It is not, however, recommended for a production environment.
We will walk through the steps needed to create a standalone deployment of the OpenStack Victoria release utilizing TripleO on CentOS 8. We will then create the components required to launch and connect to a virtual machine (VM) successfully. Finally, we will write a script to clean up the deployment.
Pre-deployment configuration
Host machine
I used a RHEL 8.2 machine for the host in this scenario. You may need to adjust the steps slightly for Fedora or CentOS.
To take advantage of Cockpit to manage networks and virtual machines, start and enable the service, and then install the cockpit-machines package:
sudo systemctl enable --now cockpit.socket
sudo yum install -y cockpit-machines
Make sure the br_netfilter module is loaded in the kernel:
sudo modprobe br_netfilter
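To confirm the module is now present, check the loaded modules:
lsmod | grep br_netfilter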
Enable IPv4 IP forwarding if it is not already enabled:
sudo nano /etc/sysctl.conf
net.ipv4.ip_forward = 1
Reload sysctl.conf without rebooting:
sudo sysctl -p /etc/sysctl.conf
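You can confirm the setting took effect:
sysctl net.ipv4.ip_forward
The command should report net.ipv4.ip_forward = 1.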
Networking
Before we begin, create a standalone network in addition to your default network. The new network will be your management network. You can adjust the following to suit your own environment:
Create the standalone network
Utilize 192.168.24.0/24 as the standalone network. First, create a standalone.xml file:
sudo nano /tmp/standalone.xml
<network>
  <name>standalone</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <ip address='192.168.24.1' netmask='255.255.255.0'>
  </ip>
</network>
Next, utilize virsh to define, autostart, and start the standalone network:
sudo virsh net-define /tmp/standalone.xml
sudo virsh net-autostart standalone
sudo virsh net-start standalone
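You can confirm the network is defined and active:
sudo virsh net-list --all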
Standalone VM creation
As this deployment utilizes a VM versus bare metal, we need to create a VM on which to deploy our standalone environment.
The specs for the VM are (a sample virt-install command follows the list):
- CentOS 8 (variant rhel8.2)
- 60 GB disk
- 8 GB RAM
- 4 CPUs
- Standalone network
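If you prefer the command line to Cockpit for creating the VM, a virt-install invocation along these lines matches those specs. The VM name, ISO path, and network attachment shown here are assumptions, so adapt them to your host:
sudo virt-install \
--name standalone \
--memory 8192 \
--vcpus 4 \
--disk size=60 \
--os-variant rhel8.2 \
--cdrom /var/lib/libvirt/images/CentOS-8-x86_64-dvd1.iso \
--network network=standalone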
When installing CentOS 8 on your standalone VM, make sure you do not install libvirt-tools, and use a minimal or server installation. You will also need to create a stack user.
Standalone configuration
Once TripleO has been deployed as a standalone system, you will not be able to SSH to the VM with your password. To prepare for that, you need to copy your SSH key to the stack user. Here is the command:
ssh-copy-id -i ~/.ssh/<your ssh key> stack@<standalone>
You need to configure the stack user for NOPASSWD in sudo:
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
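As the stack user, you can check that passwordless sudo works without being prompted:
sudo -n true && echo "passwordless sudo is working"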
The standalone machine needs a fully qualified domain name (FQDN), which can be set as follows:
sudo hostnamectl set-hostname standalone.example.com
sudo hostnamectl set-hostname standalone.example.com --transient
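Verify the hostname settings:
hostnamectl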
Update your system and reboot it if there are any kernel changes:
sudo yum update -y
sudo reboot
Download and install the python3-tripleo-repos RPM from https://trunk.rdoproject.org/centos8/component/tripleo/current/, then enable the Victoria repositories and install the TripleO client:
sudo yum install -y https://trunk.rdoproject.org/centos8/component/tripleo/current/python3-tripleo-repos-<version>.el8.noarch.rpm
sudo -E tripleo-repos -b victoria current
sudo yum install -y python3-tripleoclient
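You can quickly confirm the client installed:
openstack --version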
Configure and deploy
Standalone configuration
You need to create several configuration files before you can deploy your standalone environment.
The first file is the containers-prepare-parameters.yaml file, which will be used to pull your containers. Use the TripleO client to create a base file:
openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameters.yaml
Next, update the push_destination to false and the namespace to pull from quay.io:
nano containers-prepare-parameters.yaml
push_destination: false
namespace: quay.io/tripleovictoria
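For context, the relevant portion of the edited file ends up looking roughly like this (an abbreviated, illustrative excerpt; the other keys the command generates are left out):
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      namespace: quay.io/tripleovictoria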
Next, configure the network settings utilizing a single NIC configuration. Before you proceed, you need to determine the interface your standalone network is on. Note that the interface may not be configured yet, so it will be the interface without an IP.
ip addr
To reuse parameters during the configuration of the standalone_parameters.yaml file and then the installation, export them as environment variables as follows:
export IP=192.168.24.2
export VIP=192.168.24.3
export NETMASK=24
export GATEWAY=192.168.24.1
export INTERFACE=<interface>
Now, create the standalone_parameters.yaml file by using cat:
cat <<EOF > $HOME/standalone_parameters.yaml
parameter_defaults:
  CloudName: $IP
  # default gateway
  ControlPlaneStaticRoutes:
    - ip_netmask: 0.0.0.0/0
      next_hop: $GATEWAY
      default: true
  Debug: true
  DeploymentUser: $USER
  DnsServers:
    - 1.1.1.1
    - 8.8.8.8
  # needed for vip & pacemaker
  KernelIpNonLocalBind: 1
  DockerInsecureRegistryAddress:
    - $IP:8787
  NeutronPublicInterface: $INTERFACE
  # domain name used by the host
  CloudDomain: localdomain
  NeutronDnsDomain: localdomain
  # re-use ctlplane bridge for public net, defined in the standalone
  # net config (do not change unless you know what you're doing)
  NeutronBridgeMappings: datacentre:br-ctlplane
  NeutronPhysicalBridge: br-ctlplane
  # enable to force metadata for public net
  #NeutronEnableForceMetadata: true
  StandaloneEnableRoutedNetworks: false
  StandaloneHomeDir: $HOME
  InterfaceLocalMtu: 1500
  # Needed if running in a VM, not needed if on baremetal
  NovaComputeLibvirtType: qemu
EOF
Now you are ready to deploy the TripleO standalone environment using the following command:
sudo openstack tripleo deploy \
--templates \
--local-ip=$IP/$NETMASK \
--control-virtual-ip $VIP \
-e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml \
-r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
-e $HOME/containers-prepare-parameters.yaml \
-e $HOME/standalone_parameters.yaml \
--output-dir $HOME \
--standalone
Installation verification
You can now verify the installation with the OpenStack CLI:
export OS_CLOUD=standalone
openstack endpoint list
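As an additional check, you can confirm that the core compute and networking services registered:
openstack compute service list
openstack network agent list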
Creating a flavor, image, key pair, security group, network, and server
Now that you have installed and verified your standalone environment, it is ready to use. You will create a small instance named myserver that runs Cirros, along with the components needed to launch and connect to it. Before you start, configure the command line to access the deployment:
export OS_CLOUD=standalone
Flavor
You are now ready to create the tiny flavor you will use and then verify its creation:
openstack flavor create --ram 512 --disk 1 --vcpus 1 --public tiny
openstack flavor list
Image
Now that you have a flavor, download the Cirros image, upload it to Glance, and verify it was created:
wget https://download.cirros-cloud.net/0.5.0/cirros-0.5.0-x86_64-disk.img
openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.5.0-x86_64-disk.img
openstack image list
Key pair
To connect to your VMs without typing a password, create a new SSH key on the standalone system, upload it as a key pair named default, and verify it was created:
ssh-keygen
openstack keypair create --public-key ~/.ssh/id_rsa.pub default
openstack keypair list
Security group
The next task is to create a security group called basic with rules that allow us to SSH and ping our instance:
openstack security group create basic
openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
openstack security group rule create --protocol icmp basic
openstack security group rule create --protocol udp --dst-port 53:53 basic
openstack security group list
openstack security group show default
Network
Before we create the networks, export the following parameters for the standalone machine, the public network, the private network, and the subnets we will create:
export GATEWAY=192.168.24.1
export STANDALONE_HOST=192.168.24.2
export PUBLIC_NETWORK_CIDR=192.168.24.0/24
export PRIVATE_NETWORK_CIDR=192.168.100.0/24
export PUBLIC_NET_START=192.168.24.4
export PUBLIC_NET_END=192.168.24.5
export DNS_SERVER=1.1.1.1
The public network we will create is an external network utilizing the datacentre physical network:
openstack network create --external --provider-physical-network datacentre --provider-network-type flat public
openstack network list
We will now create an internal network named private, then build subnets called public-net and private-net:
openstack network create --internal private
openstack network list
openstack subnet create public-net --subnet-range $PUBLIC_NETWORK_CIDR --no-dhcp --gateway $GATEWAY --allocation-pool start=$PUBLIC_NET_START,end=$PUBLIC_NET_END --network public
openstack subnet create private-net --subnet-range $PRIVATE_NETWORK_CIDR --network private
openstack subnet list
The last steps are to create a router named vrouter, connect it to the public network, and add it to the private-net subnet:
openstack router create vrouter
openstack router list
openstack router set vrouter --external-gateway public
openstack router add subnet vrouter private-net
openstack router show vrouter
Server
We are now ready to create a server named myserver utilizing the flavor, image, key pair, and private network we created:
openstack server create --flavor tiny --image cirros --key-name default --security-group basic --network private myserver
Utilize the server show command, focusing on the status field, to determine whether the server is ACTIVE or in ERROR:
openstack server show -c status myserver
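If the server ends up in ERROR, the instance console log is often the quickest place to look:
openstack console log show myserver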
Before we can connect to the server, we need to create a floating IP and add it to our server:
openstack floating ip create public
openstack server add floating ip myserver <IP>
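If you need to look up the address that was allocated, list the floating IPs:
openstack floating ip list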
As we have attached a key pair to our instance and opened up the SSH port in the server's security group, we can simply SSH to the server as the cirros user to test:
ssh cirros@<IP>
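The basic security group also allows ICMP, so pinging the floating IP is another quick connectivity check:
ping -c 3 <IP>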
Clean up deployment
If you need to clean up your environment, remove the services and files installed for the standalone deployment. To do this, create a script called standalone-cleanup.sh:
cat <<EOF > $HOME/standalone-cleanup.sh
#!/bin/bash
echo "Tearing down TripleO environment"
if type pcs &> /dev/null; then
    sudo pcs cluster destroy
fi
if type podman &> /dev/null; then
    echo "Removing podman containers and images (this takes time...)"
    sudo podman rm -af
    sudo podman rmi -af
fi
sudo rm -rf \
    /var/lib/tripleo-config \
    /var/lib/config-data /var/lib/container-config-scripts \
    /var/lib/container-puppet \
    /var/lib/heat-config \
    /var/lib/image-serve \
    /var/lib/containers \
    /etc/systemd/system/tripleo* \
    /var/lib/mysql/*
sudo systemctl daemon-reload
EOF
Make the script executable:
chmod u+x standalone-cleanup.sh
Use the following command to run the cleanup:
./standalone-cleanup.sh
Wrap up
TripleO can be useful for creating a lab or demonstration environment. There are a few pitfalls you must be careful of to make it work. This article covered the steps necessary to deploy, configure, and clean up TripleO using a RHEL-based environment.
Amy Marrich
Amy Marrich is a Principal Technical Marketing Manager at Red Hat.