An overview of 7 tech trends reshaping enterprise architecture
Erik Bakstad, co-founder and interim CEO of Ardoq, had this to say in a 2021 TechTarget article on emerging trends in enterprise architecture: "We'll see tools going in different directions and having different focuses." Bakstad goes on to say,
You don’t draw your architecture. It is basically derived from the data you put into the tools. That opens up different uses for data analytics to create future-state scenarios, quantify the benefits to the business, and use that to make strategic decisions.
Enterprise architecture is a place of continuous change. Every time a new technology appears, the architectural landscape shifts, making it hard for architects, even those with years of experience, to keep up. It seems that every time you turn around there’s a new trend taking the profession by surprise. Some trends matter; others are no more than a bright shiny thing that draws our attention yet offers little value.
In this article we’re going to take a look at trends in enterprise architecture that matter. Through interviews with a number of industry experts, we discovered seven trends in modern enterprise architecture worth a closer look. These trends are:
- Shifting the burden of computing to the edge
- Increased utilization of hybrid cloud for microservices and containers
- Incorporating DevSecOps into agile frameworks
- Growing acceptance of continuous integration and continuous delivery (CI/CD)
- More demand for the talents of Site Reliability Engineers
- The emergence of hyperautomation in enterprise architecture
- More effective deployments using AIOps
Let’s take a look at the details.
1. Edge computing
Amy Abatangle describes edge computing as “an evolution, not necessarily a revolution, of distributed computing.” She goes on to outline the key advantages of edge computing as:
- Reduced latency and bandwidth - Without having to backhaul traffic, apps at the network edge that can process the majority of data locally (for example, on IoT devices) can selectively connect to remote resources only when additional processing is needed, thereby lessening network requirements.
- Increased privacy, security, and resiliency - Keeping data local to the device may enable enhanced security for critical infrastructure and other applications where centralizing data is undesirable. However, device management, standardization, redundancy, and failover will be vital.
- Reduced cloud computing costs and dependency - Processing data locally with less reliance on centralized resources can potentially reduce costs.
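As a rough illustration of the latency and bandwidth point above, an edge node can filter telemetry locally and forward only the readings that need central processing. This is a minimal Python sketch; the threshold, field names, and data shape are all hypothetical, not from a real IoT SDK:

```python
# Sketch of edge-side filtering: process sensor readings locally and
# forward only the anomalous ones to a (hypothetical) central service.

THRESHOLD = 75.0  # e.g., a temperature in Celsius considered anomalous

def filter_at_edge(readings):
    """Return only the readings that need central processing."""
    return [r for r in readings if r["value"] > THRESHOLD]

readings = [
    {"sensor": "s1", "value": 21.4},
    {"sensor": "s2", "value": 80.2},  # anomalous -> forwarded upstream
    {"sensor": "s3", "value": 19.9},
]

to_forward = filter_at_edge(readings)
print(len(to_forward), "of", len(readings), "readings forwarded upstream")
```

The bulk of the data never leaves the device, which is exactly the reduction in backhaul traffic described above.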
Abatangle also adds,
Telcos are big proponents of edge computing and see it as an opportunity to rebalance their relationships with hyperscalers. Time will tell, since edge computing is still waiting for the “killer apps” that are going to accelerate the need for built-out edge resources.
Why should Enterprise Architects care about edge in the coming years? This architecture is redefining the way industries everywhere optimize data capture, as well as scale and automate their systems and core business processes. Edge technology is changing the way manufacturers process and analyze their production and equipment data. In retail, IoT is revolutionizing the customer experience by bringing the computing power closer to the buyer. At a time when the world needs it the most, edge computing is also helping the healthcare industry cut costs and improve productivity by optimizing the use of certain equipment, tracking and monitoring disease progression, and offering health advice based on data to improve wellness.
2. Hybrid cloud and microservices
While edge computing isn’t necessarily a new technology, it’s still a fairly new computing practice, not yet having gained the traction that hybrid cloud has. Hybrid cloud allows you to keep leveraging your on-premises infrastructure and your choice of a combination of public and private cloud services. It also enables you to scale on-demand and take advantage of AI/ML capabilities to help parse data faster.
If a cloud environment can use a hybrid model, then so can a microservices environment. Integrate the two, and you’ve got the power of your legacy systems without having to deal with the complex layers of those systems. How? Because a microservices architecture allows you to componentize the most beneficial parts of your application.
Microservice architecture, often used in cloud-native integration solutions, has made it easier for teams to code more efficiently, debug, and deploy faster, slowly replacing the massive, inflexible legacy systems. When managed correctly, microservices allow developers to better understand the source code without slowing down development because code review, application build, and deployment are streamlined compared to monolithic applications.
Of course, you can’t dismiss monoliths altogether when considering microservices. Containers make the decoupling of applications more convenient by abstracting them from the environment in which they actually run, keeping the way an application is deployed irrelevant to its target environment. But this decoupling is only as effective as its architecture. Organizations must be ready for the complexity “smaller” services bring. More services equal more resources, which means a bit more housekeeping for DevOps teams.
On the plus side, though, microservices, particularly when designed to be stateless, make for an incredibly scalable and distributed system, which can help organizations avoid the bottlenecks associated with traditional databases. The flexibility to choose how to use microservices makes them ideal for enterprises that don't have the developer resources to migrate their monolithic applications entirely to microservices. Given that complexity, it's best for enterprises to plan how they will use microservices.
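To illustrate why statelessness helps scaling, here is a minimal sketch; the handler and request shape are invented for the example. Because the response depends only on the request, any replica can answer any call, and a load balancer can add or remove replicas freely:

```python
# Sketch of why stateless services scale: the handler derives its
# response entirely from the request, reading no session or
# server-side state, so any replica can serve any request.
import json

def handle_request(request: dict) -> dict:
    """A stateless handler: the output is a pure function of the input."""
    items = request.get("items", [])
    return {"status": 200, "body": json.dumps({"count": len(items)})}

# Two "replicas" of the same function give identical answers, so a
# load balancer can route a given request to either one.
req = {"items": ["a", "b", "c"]}
assert handle_request(req) == handle_request(req)
```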
With this in mind, traditional monolithic applications can still be implemented, so long as they’re flexible enough to keep up with the pace of existing architectures. Many big-name companies such as Netflix and Amazon have re-architected their monolithic applications using a microservices framework.
The cloud era of computing doesn’t mean enterprises can simply buy as they need and forget about application development. The solutions that AppDev teams craft must work for the cloud and a company’s on-premises data centers. Software Architects should be prepared to equip their development teams with the best tools, such as Kubernetes or OpenShift Container Platform, to help simplify the deployment and management of containerized applications in a hybrid cloud environment.
3. DevSecOps and agile frameworks
"The DevSecOps manifesto says that the reason to integrate security into dev and ops at all levels is to implement security with less friction, foster innovation, and make sure security and data privacy are not left behind," says Michael Calizo, an Associate Principal Solution Architect, on the complexity of DevSecOps. "Therefore, DevSecOps encourages security practitioners to adapt and change their old, existing security processes and procedures.
Agile helps business and development teams work together to improve the process of delivery. The engineering behind DevSecOps combines the capabilities of development, security, and operations to deliver applications to production more rapidly and efficiently. DevSecOps processes come down to the people involved. Enterprise Architects can help guide their organization toward a more "security-first" culture through these practices.
To help accelerate cultural change, the organization needs leaders and enthusiasts that will become agents of change. Embed these people in the dev, ops, and security teams to serve as advocates and champions for culture change. This will also establish a cross-functional team that will share successes and learnings with other teams to encourage wider adoption.
Architects can design a DevSecOps program in such a way that collaboration is just as much of a tool as is the use of containers or automation. For instance, Enterprise Architects could enforce policy-driven or policy-based automation to fuse security straight into the software delivery process. In doing this, both development and operations teams know and understand (based on the set policy) that they're both responsible for not only delivering quality and reliable products but are also expected to practice security in a more cohesive way with automated checks, for instance, rather than as a final step.
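As one hedged sketch of what policy-driven automation can look like in practice, the check below fails a pipeline step when a dependency manifest pins a version the security team has flagged. The manifest and blocklist formats here are assumptions for illustration, not any real tool's schema:

```python
# Illustrative policy gate: block the build if any dependency pins a
# version the security team has flagged. In a real pipeline this would
# run automatically on every change, not as a final manual step.

BLOCKED = {("log4j-core", "2.14.1")}  # example of a flagged version

def policy_check(manifest):
    """Return a list of violations; an empty list means the gate passes."""
    return [dep for dep in manifest if (dep["name"], dep["version"]) in BLOCKED]

manifest = [
    {"name": "requests", "version": "2.31.0"},
    {"name": "log4j-core", "version": "2.14.1"},
]

violations = policy_check(manifest)
if violations:
    print("Policy gate failed:", [v["name"] for v in violations])
```

Because the policy lives in code, both development and operations see the same automated check, which is the shared-responsibility point above.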
It's important to note that an organization can implement DevSecOps and not be Agile. For instance, an organization may be mostly Agile, but one department or business unit within it could still practice Waterfall project management. There's also Kanban (widely thought of as a type of Agile) and Scrum (an Agile framework), each with its own emphasis: Kanban on the continuous flow of work, Scrum on breaking work into fixed-length increments. But in an ideal environment, an organization would practice both Agile and DevOps, and to practice both requires a cultural shift and a greater understanding of the "engineering" aspects of development and the role that security plays in delivering better outcomes, or DevSecOps.
Continuous Integration (CI) and Continuous Delivery (CD) form another important aspect of improving the software development life cycle.
4. Continuous integration and continuous delivery (CI/CD)
Red Hat defines the steps that form continuous integration and continuous delivery (or deployment) pipelines as “distinct subsets of tasks grouped into what is known as a pipeline stage.” These stages typically include:
- Build - The application is compiled or transformed from source code to machine code.
- Test - At this stage, the code is tested, with automation reducing the time otherwise spent working on configuration files.
- Release - The application is delivered to the repository.
- Deploy - The code is deployed to production.
- Validation and Compliance - For a build to be validated, the changes must comply with a company’s policies and development life cycle.
It’s important to note that CI/CD is not exclusive to DevOps. It’s a choice made by many folks working with containers and Kubernetes.
Alex Radka, Principal Engineer at Sentient Digital, Inc., a government technology solution provider, describes a CI/CD perspective through an Agile lens:
You want to take into account microservice design. Use smaller objects and classes. This makes it easier to test inputs and outputs (unit tests). Also, consider implementing a (data) mocking system. This allows you to do integration testing early on. It also forces you to focus on API definition as a mechanism to communicate with other teams/systems. You’re also going to want to create a testing framework. A testing framework is a sub-system by itself, but it won’t appear organically. A testing framework takes a lot of planning and needs the attention of senior developers or architects to get started. Test-driven design is also important to keep in mind. Even if you don’t actually write the test, think about how you will write the test, because it will affect every detail of the design.
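Radka's point about a mocking system can be illustrated with Python's standard unittest.mock: the test replaces the other team's API client with a mock, so inputs and outputs can be exercised before the real service exists. The `fetch_price` and `quote_total` names are hypothetical:

```python
# Sketch of mock-based integration testing: the downstream pricing
# client is mocked, so this unit can be tested against the agreed API
# shape long before the real service ships.
from unittest.mock import Mock

def quote_total(client, items):
    """Sum prices fetched from another team's (here, mocked) pricing API."""
    return sum(client.fetch_price(item) for item in items)

client = Mock()
client.fetch_price.side_effect = lambda item: {"a": 2.0, "b": 3.5}[item]

assert quote_total(client, ["a", "b"]) == 5.5
client.fetch_price.assert_called_with("b")  # the mock records each call
```

Note how the mock doubles as documentation of the API contract, which is the "mechanism to communicate with other teams" Radka describes.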
Of course, no CI/CD process is perfect. There are always challenges to overcome, such as separate code repositories causing confusion between teams, automating load testing, and updating services. Luckily, each of these challenges has a mitigation method: containerization, release management to help pace deployments, and deployment testing techniques such as blue-green or canary releases, which let developers compare newer versions of an app against the older ones.
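As one sketch of how a canary release might split traffic, the routing below hashes the user ID so each user consistently lands in the same bucket while only a small percentage sees the new version. The function and percentages are illustrative, not taken from any specific release tool:

```python
# Deterministic canary routing: hash the user ID into one of 100
# buckets, then send the lowest buckets to the canary version. Because
# the hash is stable, each user always gets the same version.
import hashlib

def route(user_id: str, canary_percent: int) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

hits = [route(f"user-{i}", 10) for i in range(1000)]
print("canary share:", hits.count("canary") / len(hits))  # roughly 0.10
```

Comparing error rates between the two buckets before widening the split is what makes the old-versus-new comparison mentioned above possible.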
5. Site Reliability Engineers
Ben Treynor, one of the minds behind Site Reliability Engineering (SRE), describes SRE as “what happens when a software engineer is tasked with what used to be called operations.”
Put simply, development teams want to create awesome applications and release them to the world. Ops teams work hard to make sure these applications don’t mess things up. The problem, however, lies in the divide between the two IT functions. Where does the responsibility for scalable, software-driven solutions to Ops challenges sit? That work falls to the SREs, a team of critical thinkers with both operations and coding skills.
Like the architect role, the SRE role requires a healthy combination of depth and breadth and an extensive background in engineering. SRE is not exclusive to a role; it’s also a practice. It can be thought of as bridging the gap between the theory of DevOps and the practice of building systems that are change-ready at any scale.
6. Hyperautomation
Pieter VanIperen, a veteran software architect and managing partner at PWV Consultants, defines hyperautomation as “...a growing trend, (that) encompasses Robotic Process Automation (RPA), artificial intelligence, machine learning and process mining.”
Hyperautomation, a nascent trend on the horizon, marries teams and processes through robotic process automation, artificial intelligence, machine learning, and process mining capabilities to automate as many business and IT processes as possible in a way that legacy business process automation cannot.
Leo Baker, CTO at Vendorland, states:
Hyperautomation is nothing like a random trend but a logical next step in the development of the digital enterprise paradigm. When I last checked, the RPA market revenues were nearing $5 billion and were projected to double in just two years, which is the sign of rising demand and more windows of opportunity opening. I see it reflected in automation consultants’ agendas for 2021-2025: software robots coupled with AI, low-code process automation platforms. Analytics are poised to take business process management up a notch, and progressive enterprises will definitely get hold of it for the sake of higher productivity, more rapid processes, and greater efficiency of their operations.
Baker goes on to discuss the successes of RPA, saying,
There are a growing number of RPA success stories, particularly in industries heavily reliant on data and multi-party collaborative workflows. However, the truly ‘intelligent’ automation is still nascent. I expect hyperautomation to gain pace in the following year or two, with companies going for semi-autonomous systems with ‘humans in the loop’ where complex decision-making or validation is required.
7. AIOps
With a sharp rise in the quantity and variety of inputs, automation and machine learning are instrumental in helping human analysts comb through the sea of data and separate signal from noise.
Artificial Intelligence for IT Operations (AIOps), or Algorithmic IT operations as Gartner initially coined it five years ago, is an analytic approach of many names. Gartner describes it as a method of “combining big data and machine learning to automate IT operations processes, including event correlation, anomaly detection and causality determination.”
Padraig Byrne, Senior Director Analyst at Gartner, says that the sheer amount of data generated by IT infrastructure and applications that IT operations teams have to manage, often while working in disconnected silos, can make addressing complex issues that arise even more difficult. AIOps is a collection of tools that automates rapid data processing. Machine learning is then used to analyze the data to predict and alert you to issues.
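The anomaly-detection piece Byrne describes can be sketched in a few lines: flag any metric sample that sits far from the mean of the samples just before it. Production AIOps platforms use far richer models; this only shows the idea, and the latency figures are made up:

```python
# Minimal rolling anomaly detector: flag samples more than `threshold`
# standard deviations away from the mean of the preceding `window`
# samples. A real AIOps pipeline would also correlate events and
# estimate causality, per the Gartner definition above.
from statistics import mean, stdev

def anomalies(samples, window=5, threshold=3.0):
    """Return the indices of samples that deviate sharply from recent history."""
    flagged = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

latency_ms = [10, 11, 10, 12, 11, 10, 95, 11, 10, 12]
print(anomalies(latency_ms))  # flags the 95 ms spike
```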
It’s easy to think of AIOps as a product, but it’s actually a proactive set of practices. Open source monitoring tools are a great way to read the code and understand what it’s doing and how it’s working. Marcel Hild, an open source evangelist, Red Hat Solutions Architect, and author of 6 misconceptions about AIOps, explained, cites Prometheus for domain monitoring and other open source projects such as Loki and Jaeger to help with logs and traces.
The power of the people that make up development and IT operations teams is undoubtedly extraordinary, but there are some areas where our capabilities need the support of intelligent systems to provide actionable data and insights. With the burgeoning of DevOps has come the need for a functional AIOps strategy, which ITOps Times says IT teams can use to “predict and proactively address potential issues across complex, hybrid environments to optimize service levels and support the agile DevOps processes needed for the digital business.”
What trends are you seeing take shape in your organization?
The above trends are only a few of the many new IT architecture trends I have come across in my quest for the next great paradigm or process in IT architecture. But only you, as an Architect, Developer, or Engineer, can tell your story and tell it well.
What kind of practices have you and your teams adopted to support your IT architecture? Furthermore, what architectural concepts and tools do you wish more people knew about? Share what trends are affecting your work by contributing to Enable Architect.
Navigate the shifting technology landscape. Read An architect's guide to multicloud infrastructure.