Red Hat Enterprise Linux (RHEL) 7.3 has been out for a bit, but have you looked at what we’ve added in the Identity Management area for this release? I’m excited to say, we’ve added quite a bit!
In the past I talked about individual features in Identity Management (IdM) and the System Security Services Daemon (SSSD), but that is not really how we prioritize our efforts nowadays. We look at customer requests, community efforts, and market trends, and then define themes for the release. So what were these themes for RHEL 7.3?
Improvements to the Core
As our identity management solution matures, customers are deploying it in more sophisticated environments: more than fifty thousand systems or users, complex and deeply nested group structures, and advanced access control and sudo rules. In such environments, IdM and SSSD were not always meeting performance and scalability expectations. We wanted to correct that, so several efforts in different areas were launched to make the solution work better for such complex deployments. In our test environment, on a reference VM with 4 GB of RAM and 8 cores, we managed to improve:
- User and group operations with complex group structure - about 3 times faster
- Kerberos authentication - about 100 times faster
- Bulk user provisioning - about 20 times faster (relies on disabling memberOf plugin and rebuilding group membership after the bulk operation)
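The memberOf optimization mentioned above happens at the Directory Server level. A rough sketch of what disabling the plugin involves, assuming the standard 389 Directory Server plugin entry (applied with ldapmodify as Directory Manager before the bulk load):

```ldif
# Disable the MemberOf plugin for the duration of the bulk load
dn: cn=MemberOf Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: off
```

After the import, the plugin is re-enabled the same way (value "on"; plugin configuration changes typically require a server restart), and group membership is rebuilt with a memberOf fixup task.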
On the client side, SSSD was slow in processing large objects in the cache, especially big groups with hundreds of members. The problem manifested itself most vividly when users ran “ls -l” on a directory with files owned by many different users. SSSD already had a workaround by means of the ignore_group_members option, but that was not enough. The structure of the SSSD cache was significantly reworked, roughly doubling performance compared to the past.
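For reference, the ignore_group_members workaround is a per-domain option in sssd.conf (the domain name below is illustrative):

```ini
# /etc/sssd/sssd.conf
[domain/example.com]
# Resolve groups without downloading their full member lists;
# group lookups return the group itself, not its membership
ignore_group_members = True
```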
In addition to that, the underlying directory server includes a new experimental feature called Nunc Stans. It addresses the problem of thousands of concurrent client connections, which can significantly affect server performance. The feature is disabled by default. If you are interested in experimenting with it, please contact your technical account manager to make us aware of your plans.
There is no limit to perfection, so we will continue working on performance and scalability improvements in follow-up releases.
DNS Related Enhancements
One of the limitations that large environments with several datacenters faced was the inability to restrict which subset of servers the clients should prefer to connect to. It was possible to limit the set explicitly by providing the list of preferred servers on the client side, but that required additional configuration steps on every client, which is an administrative overhead.
A better solution is to rely on DNS to identify the servers a client can connect to. But with the original DNS implementation there was no way to associate a set of clients with a set of servers, so clients could end up going to the other side of the globe to connect to a server in a remote datacenter.
The DNS locations feature introduced in the current release solves this problem by allowing administrators to define a set of servers in a datacenter and to affiliate clients with that set of servers. The feature is functionally similar to the Active Directory capability called “sites.” The changes are in the IdM DNS server, so the feature is available in deployments that rely on the DNS server provided by IdM to manage connected Linux clients.
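From the command line, the feature is managed with the new location commands; a minimal sketch, assuming illustrative location and server names:

```
# Define a location and assign an IdM server to it
ipa location-add datacenter-east
ipa server-mod server1.example.com --location=datacenter-east
```

Clients in that datacenter then discover the nearby servers through the location-specific DNS SRV records, with the remote servers remaining available as fallbacks.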
Replica Management Improvements
In this release, the replica management area saw multiple significant improvements.
In the past, managing replicas in IdM was quite a challenge. Each replica only knew about its peers; there was no central place where all topology information was stored. As a result, it was really hard to assess the state of the deployment and see which replicas were connected to which other replicas. This has changed. Topology information is now replicated, and every replica in the deployment knows about the whole environment. To see the topology, one can use a topology graph, and replication agreements can be added and removed with a mouse click.
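The same topology information can also be inspected from any server on the command line; a sketch using the topology commands for the main “domain” suffix (server and segment names are illustrative):

```
# List replication segments (replica-to-replica connections)
ipa topologysegment-find domain

# Add a replication agreement between two servers
ipa topologysegment-add domain server1-to-server2 \
    --leftnode=server1.example.com --rightnode=server2.example.com
```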
In addition to topology information, an inventory of the installed components is also available now. In the past it was hard to see which servers had a CA or DNS server deployed. Now, with the server roles report in the UI, the administrator can see which servers hold which roles in the environment.
We also changed the replica deployment procedure, because the old one was hard to automate properly. In the past, the expectation was that replicas would be installed by humans who would type the administrative password. When you need to deploy replicas on demand, this does not scale well.
Efforts to create Puppet scripts or Ansible playbooks for replica deployment also faced the problem of embedding passwords into the body of the module. Keeping in mind that modules and playbooks are usually source controlled and need to be accessed by different people, having highly sensitive passwords in them was an audit nightmare.
To address this issue, IdM introduced a new replica installation procedure, also called replica promotion. The installer lays out the client bits first; the client registers and gets its identity. The existing master, knowing that a replica is being installed, elevates the privileges of the client so that the client can convert itself into a replica. This process allows deployment of replicas in a much more dynamic and secure fashion. Existing replication management utilities have been updated in a backward-compatible way.
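Under the new procedure, promoting an enrolled client looks roughly like this (host names are illustrative; membership in the ipaservers host group is what authorizes a host for promotion):

```
# On an existing server: allow the client to promote itself
ipa hostgroup-add-member ipaservers --hosts=replica1.example.com

# On the client: promote it to a replica
# (no replica file and no password embedded in automation)
ipa-replica-install
```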
These replication management improvements are enabled automatically for the new installations. For the existing installations to take advantage of these features one needs to update all participating servers to Red Hat Enterprise Linux 7.3 and then change the domain level setting to 1.
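Checking and raising the domain level is a single administrative operation; a sketch, assuming all servers already run Red Hat Enterprise Linux 7.3:

```
# Show the current domain level (0 on upgraded deployments)
ipa domainlevel-get

# Enable the new replica management features
ipa domainlevel-set 1
```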
Also, many customers that are interested in deploying IdM have dozens of remote sites. To accommodate this, the limit of supported servers in one deployment was increased from 20 to 60.
Continuing the trend we started by implementing, together with MIT, support for two-factor OTP-based authentication over the Kerberos protocol, IdM and SSSD in Red Hat Enterprise Linux 7.3 bring in a new, revolutionary technology. This technology is called “Authentication Indicators.”
In the past, all tickets created by the Kerberos server were born equal, regardless of what type of authentication was originally used. Authentication Indicators now allow tagging the ticket in different ways, depending on whether single- or multi-factor authentication was used. This technology enables administrators to control which Kerberized services are available to users depending on the type of authentication. Using Authentication Indicators, one can define a set of hosts and services that require two-factor authentication, while letting users access other hosts and services with tickets acquired through single-factor authentication.
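As an example of how this might be configured, a service entry can be marked as requiring an OTP-authenticated ticket (the service principal below is illustrative, and this assumes the --auth-ind option on service entries):

```
# Only tickets carrying the "otp" authentication indicator
# will be accepted for this service
ipa service-mod HTTP/web.example.com --auth-ind=otp
```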
Another improvement worth mentioning is the change to how IdM and SSSD communicate SUDO policies. In the past, SSSD was able to work only with the traditional SUDO LDAP schema defined by the SUDO project. The schema that IdM uses to store SUDO information, on the other hand, is different: it was designed to provide a better user experience and improve manageability. The side effect was that IdM had to create a special LDAP view to serve SUDO information to clients, including SSSD. This view added performance overhead and complexity to the solution. With the Red Hat Enterprise Linux 7.3 release, SSSD is now capable of working with the internal SUDO schema adopted by IdM. Once the clients are updated to the latest version, the special SUDO view on IdM servers can be disabled, freeing memory and boosting server performance.
Deploying clients in the cloud requires more flexibility with the names that identify a system for Kerberos authentication. In many cases a system has an internal name assigned by a cloud provider and an external name visible outside the cloud. To be able to use multiple names for the same system or service, Identity Management in Red Hat Enterprise Linux added the ability to define alternative names (Kerberos aliases) via the user interface and the command line. With this feature, one can deploy a system in a cloud and use Kerberos to authenticate to the system or service from both inside and outside the cloud.
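A sketch of adding such an alias, assuming illustrative internal and external names:

```
# The host is enrolled under its internal name; add the external
# name as an additional Kerberos principal alias
ipa host-add-principal web.internal.example.com host/web.example.com
```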
SSSD is growing its responsibilities, and it is becoming harder to operate and troubleshoot when something goes wrong. To make administrators' lives easier, SSSD is now accompanied by a couple of new utilities. One allows fine-grained management of the SSSD cache, so that the state of the cache can be easily inspected and individual objects and entries can be tweaked or removed without removing the cache altogether. Another tool, called sssctl, provides information about SSSD status: whether it is online or not, and which servers it is currently communicating with.
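For example, sssctl can report per-domain status (the domain name below is illustrative):

```
# List configured domains, then check online status and active servers
sssctl domain-list
sssctl domain-status example.com
```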
In addition to the utilities, SSSD's processing of sssd.conf has been improved. With this enhancement, SSSD has a better chance of automatically detecting typos, missing values, and misconfiguration introduced via sssd.conf. The logic is still basic, but the change lays a good foundation for future improvements in this area.
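The validation is exposed through the same sssctl tool:

```
# Report typos, unknown options, and permission problems in sssd.conf
sssctl config-check
```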
With better sssd.conf parsing, SSSD also gained the ability to merge several sssd.conf configuration files that augment each other. This is useful when different snippets of the configuration come with different applications that rely on the SSSD service provided by the system. This way applications can augment or extend the main SSSD configuration without explicitly modifying it.
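Conceptually, the merge behaves like layered INI parsing: the main file is read first and snippets are applied on top, with later values overriding or extending earlier ones. A minimal Python sketch of that idea (not SSSD's actual code; the file contents and domain name are illustrative):

```python
# Sketch of layered config merging, in the spirit of sssd.conf plus
# /etc/sssd/conf.d/ snippets: the main file is parsed first, then each
# snippet is applied in order, and later values win on conflicts.
import configparser

def merge_sssd_config(main_text, snippet_texts):
    """Parse the main config, then layer each snippet on top of it."""
    cp = configparser.ConfigParser()
    cp.read_string(main_text)
    for snippet in snippet_texts:
        cp.read_string(snippet)  # later files override earlier values
    return cp

main_conf = """
[sssd]
services = nss, pam
domains = example.com

[domain/example.com]
id_provider = ipa
"""

# A hypothetical snippet shipped by an application in /etc/sssd/conf.d/
snippet = """
[domain/example.com]
ignore_group_members = True
"""

merged = merge_sssd_config(main_conf, [snippet])
print(merged.get("domain/example.com", "id_provider"))           # from the main file
print(merged.get("domain/example.com", "ignore_group_members"))  # from the snippet
```

The key property is that the application's snippet never has to rewrite the main file: its section is merged into the existing domain configuration.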
In Part 2, we’ll look at certificate management, interoperability, and Active Directory integration improvements you’ll find in RHEL 7.3.