Ansible makes life easier for sysadmins. In this article, I show you how Ansible makes managing hosts easier by adding a package repository (repo) and installing a package from it. But first, let me remind you how to do it without Ansible.
Add one repo to a host and install a package
Well, I guess most of you already know how this works. Anyway, here are some examples of enabling a repo on a single host and installing a package from it.
Example 1: Subscription-Manager and YUM
First, enable a repo with the subscription-manager and then install a package via yum with the following command:
$ sudo subscription-manager repos --enable=rhel-7-server-rpms
…
$ sudo yum install git
That's it. Easy, right? This example runs the commands manually, and it's simple enough for a single host.
Example 2: YUM only
I could temporarily enable a repo (if it is currently disabled in the configuration) and install a package from it with the following command:
$ sudo yum --enablerepo=rhel-7-server-rpms install git
And this one is easy, too, right?
Okay, but how do I add a repo to a remote host, and then install a package there? Well, I could do something like this:
$ ssh root@example.com yum --enablerepo=rhel-7-server-rpms install git
I guess that works, too.
Example 3: Configure multiple remote hosts
So what do I do when it comes to multiple remote hosts? Well, I could use a loop like:
$ for HOST in host1 host2 host3; do
    ssh root@$HOST yum --enablerepo=rhel-7-server-rpms install git
  done
With this loop, git is installed as a serial job on host1, host2, and host3. But what if I have hundreds of hosts? That would be a long-running loop! And what if I have to enable different repos depending on whether a host belongs to production or testing? Hosts in testing may use different repos than hosts in production. Of course, I could create a shell script solution for that problem, too. Instead of going down that rabbit hole, I will show you how to accelerate your work by using Ansible.
And now the Ansible way
When I started with automation, it was easiest to run commands in parallel on my hosts. Let's say I have the following static inventory file with two groups and four hosts in it that looks like this:
[testing]
host1
host2
[production]
host3
host4
I could run the job from Example 3 above in parallel using the Ansible ad-hoc mode:
$ ansible all -m command -a 'yum --enablerepo=rhel-7-server-rpms install git'
The command module runs a given command in parallel on the hosts specified by a host pattern (all in this case).
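By the way, the host pattern doesn't have to be all, and I don't have to stick to raw shell commands either. A minimal sketch of the same job, limited to the testing group and using the idempotent yum module instead, could look like this:

$ ansible testing -m yum -a 'name=git state=latest enablerepo=rhel-7-server-rpms'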
Add a new repo and install a package
You may have noticed that I used the rhel-7-server-rpms repo in the examples above. It already exists in my yum configuration. In some cases, though, the repo I need isn't configured at all, so I have to add it first by creating a .repo file on the host. For example, say I want to add a repo named rhel-t-stage to the remote hosts and install git from it. I could go with the following playbook:
---
- hosts: testing
  tasks:
    - name: Add repo rhel-t-stage
      yum_repository:
        name: rhel-t-stage
        description: "Repo for hosts in testing"
        baseurl: "http://mirror.example.com/rhel-t-stage"
        gpgcheck: yes
        gpgkey: file:///etc/pki/gpg-key
    - name: Install git
      yum:
        name: git
        state: latest
In the above playbook, I add the new repo to all hosts in the testing group from the inventory file. If I wanted to add a different repo to all hosts in the production group, I could copy the above playbook and modify it to fit the desired configuration for production.
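To actually apply such a playbook, I run it with ansible-playbook against my inventory. The file names below are just placeholders for this sketch:

$ ansible-playbook -i inventory add-repo-testing.yml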
But there is an even better solution!
Be aware of group_vars
For this example, I assume the following use case.
- Two repos called rhel-t-stage and custom-t-stage should be added to all hosts in the testing group.
- Two repos called rhel-p-stage and custom-p-stage should be added to all hosts in the production group.
Along with the inventory file above, I've created a group_vars directory containing the files testing and production. Variables specified in these files are used during playbook runs for the hosts in the corresponding groups from the inventory file. The following examples shed some light on it:
$ cat group_vars/testing
repo_name1: rhel-t-stage
repo_description1: RHEL packages for testing only
repo_baseurl1: http://repo.example.com/rhel-t-stage
repo_name2: custom-t-stage
repo_description2: Custom packages for testing only
repo_baseurl2: http://repo.example.com/custom-t-stage
$ cat group_vars/production
repo_name1: rhel-p-stage
repo_description1: RHEL packages for production only
repo_baseurl1: http://repo.example.com/rhel-p-stage
repo_name2: custom-p-stage
repo_description2: Custom packages for production only
repo_baseurl2: http://repo.example.com/custom-p-stage
As you see, each line starts with a variable name followed by the value assigned to that variable. These are used in a playbook as follows:
---
- hosts: all
  tasks:
    - name: Add RHEL repo
      yum_repository:
        name: "{{ repo_name1 }}"
        description: "{{ repo_description1 }}"
        baseurl: "{{ repo_baseurl1 }}"
        gpgcheck: yes
        gpgkey: file:///etc/pki/RPM-GPG-KEY-example
    - name: Add custom repo
      yum_repository:
        name: "{{ repo_name2 }}"
        description: "{{ repo_description2 }}"
        baseurl: "{{ repo_baseurl2 }}"
        gpgcheck: yes
        gpgkey: file:///etc/pki/RPM-GPG-KEY-example
Ansible looks up the variables in the group_vars directory and uses the specified values accordingly. This way, the correct repos are added to the testing and production systems. Because the modules used in this example are idempotent (like most Ansible modules), I don't have to worry whether these repos are already configured. If the repos are already set, Ansible recognizes this and doesn't change anything.
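To put it all together, the layout of this small project could look roughly like the following sketch (add-repos.yml is just a placeholder name for the playbook above), and a dry run with the --check option shows what Ansible would change before I run it for real:

.
├── inventory
├── group_vars/
│   ├── production
│   └── testing
└── add-repos.yml

$ ansible-playbook -i inventory add-repos.yml --check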
Wrap up
Using Ansible helped me to reduce the lines of code and, therefore, the risk of failures. Configuration files and playbooks based on YAML are easy to read, and in my opinion, much easier to read than custom shell scripts.
For me, I like the Ansible way of doing things. What about you?
If you are looking for a place to start with automation, take a look at my previous article, Easing into Automation.
[ Looking for more on system automation? Get started with The Automated Enterprise, a free book from Red Hat. ]
About the author
Jörg has been a Sysadmin for over ten years now. His fields of operation include Virtualization (VMware), Linux System Administration and Automation (RHEL), Firewalling (Forcepoint), and Loadbalancing (F5). He is a member of the Red Hat Accelerators Community and author of his personal blog at https://www.my-it-brain.de.