You have probably heard the narrative by now: AI is coming for your job. Headlines suggest that entire professions will soon disappear, replaced by increasingly capable models that can analyze, code, write, and reason faster than any human. But is that really the case?
In this blog post, I want to share a practical perspective based on my own daily work. Rather than speculating about the future in abstract terms, I looked at where AI can actually replace parts of my job today. The conclusion is something I will leave for you to decide.
First, a bit of context. I work as a senior specialist solutions architect in application development and AI at Red Hat and as an evangelist for platform engineering and sovereignty. My role is quite diverse. On any given week I might run technical workshops, prepare product positioning and competitive analyses, build demonstrations, speak at conferences, or write technical blog posts.
Much of my work revolves around helping organizations reduce cognitive load through platform engineering, adopt AI in a responsible and sovereign way, and build modern application platforms using technologies such as Red Hat OpenShift, Red Hat OpenShift AI, Kafka, and Red Hat OpenShift Service Mesh.
On top of that, I also continue to build and maintain demo applications and workshop assets for conferences, events, and hands-on sessions. This means I still write code myself, keep applications running, and stay closely connected to the practical reality of building and maintaining software.
This mix of responsibilities made my job an interesting experiment. If AI is going to fundamentally change knowledge work, my role should be a perfect candidate. Many of the tasks I perform are exactly the kind of activities that modern AI systems claim to automate.
So I decided to test that assumption: if we listen to the current marketing narratives around AI, then systems should already be able to replace me in several areas.
The first area is analysis and decision support. A large part of my job involves comparing technologies, platforms, and architectures. For example, I may need to analyze how a platform like OpenShift AI compares to hyperscaler-native AI services, or how Kafka-based event-driven architectures compare to more traditional integration approaches. These kinds of analyses also happen at the Chief Information Officer (CIO) and Chief Technology Officer (CTO) levels when organizations evaluate technology strategies, sovereignty requirements, or vendor lock-in risks. In theory, AI should be able to gather information, compare platforms, and produce structured insights, decisions, and strategy proposals.
The second area is coding. I regularly build demos, prototypes, and sometimes production features for smaller projects. Tasks include writing code, refactoring existing codebases, designing architecture structures, or integrating AI capabilities into applications using frameworks like Quarkus, Kafka, or KServe on OpenShift AI. These are the same types of activities performed by software developers and engineers every day. With the rise of AI coding assistants and autonomous coding agents, the obvious question appears—is AI going to replace software developers?
The third area is content creation. I write blog posts, prepare technical content, and occasionally create material for social media or conference talks. This is often presented as the easiest task to automate. Many influencers demonstrate pipelines where an AI system automatically generates a blog post every week and distributes it across social media channels.
So if we follow the promise, AI should already be able to handle much of this work.
To keep things practical, I decided to focus this exploration on 3 concrete domains:
- Can AI replace people when it comes to analysis and decision-making?
- Can AI replace people when it comes to coding and software development?
- Can AI replace people when it comes to writing blog posts or social media content?
Let’s explore these questions one by one and see where reality stands today.
(And hopefully, by the end of this story, we all still live happily ever after.)
Can AI replace people when it comes to analysis and decision-making?
To explore this in practice, I looked at a real-world example from my own work. I recently had to compare OpenShift with 2 competing Kubernetes solutions. Instead of only writing the analysis myself, I decided to run a small experiment. I produced 4 versions of the comparison: 1 written entirely by me, and 3 generated using leading AI models.
Before diving into the results, a brief introduction to OpenShift helps provide context.
What is Red Hat OpenShift?
Red Hat OpenShift is an enterprise-grade Kubernetes platform built on open source technologies. It was originally designed with a strong focus on reducing cognitive load for developers, which is one of the core principles of platform engineering.
Rather than being a fork of Kubernetes, OpenShift differentiates itself through integration. Running Kubernetes at enterprise scale requires far more than the core orchestration engine. Organizations need logging, security, storage, networking, event streaming (Kafka), service-to-service communication (OpenShift Service Mesh), and software delivery lifecycle tooling that all work together.
Red Hat contributes to many of these technologies in upstream open source communities and integrates them into the OpenShift platform. The result is a security-focused, enterprise-ready platform for applications, developer portals, trusted software supply chains, containers, virtualization, and AI workloads through OpenShift AI.
Because the platform is open source, organizations looking for sovereignty can run workloads across hybrid and multicloud environments and mitigate risks of proprietary lock-in, which becomes increasingly important in the context of regulations such as the EU AI Act, NIS2, DORA, and the US Cloud Act.
OpenShift is offered in multiple configurations, ranging from a Kubernetes engine to a full platform that includes advanced capabilities such as multicluster management, GitOps, security, and AI model serving.
With that context in place, we can return to the experiment.
The experiment
What immediately struck me was that OpenShift scored surprisingly poorly in several AI-generated analyses, even in areas where it should perform strongly. This pattern appeared across all models, although the details differed.
When I started questioning the results, the issues became clear. One model only evaluated the base Kubernetes layer and ignored broader platform capabilities such as the integrated service mesh, tracing, advanced networking, monitoring, and observability. Another model compared a full enterprise platform with a near-vanilla Kubernetes distribution, which made the pricing comparison misleading.
Asking follow-up questions improved the results, but other issues remained. Each model used different evaluation criteria—some relied on outdated pricing or feature assumptions, and several comparisons ignored key enterprise dimensions such as operational risk, governance, and sovereignty.
A realistic comparison required a much broader and more nuanced analysis, including dimensions such as enterprise security, developer and operator experience, total cost of ownership, operational risk factors, digital sovereignty considerations, and contributor backing in open source communities.
Interestingly, some of these dimensions were suggested by the AI models during the conversation. They helped expand the analysis, even though their initial conclusions were flawed.
Results
The lesson was clear. AI is a powerful tool to accelerate this kind of work and can highlight perspectives you might initially overlook, but the output still needs to be challenged by subject matter experts (SMEs).
Without that expertise, it's easy to accept conclusions that are incomplete, biased, or simply comparing apples with oranges, especially in complex domains like platform engineering, hybrid cloud, and AI platforms.
For me, AI sped up the work and helped structure the analysis, but it didn't replace human expertise.
Can AI replace people when it comes to coding and software development?
I am currently working on several blog posts in this domain (vibe coding and AI-assisted coding), where I start from scratch and try to build web applications almost entirely with AI.
For this example, I will use the creation of a demo for my Java conference talk, “AI without spaghetti: clean architecture in the age of AI.”
The demo
The demo starts from a legacy static website from the 1990s and gradually transforms it into a modern application running on OpenShift, integrating AI capabilities via OpenShift AI and event-driven communication using Kafka.
By the end, the application becomes a modern, scalable system with a chatbot and backend services following clean architecture principles.
The experiment
At first, I struggled quite a bit. I started with AI coding tools that produced impressive code snippets very quickly. It felt powerful enough to replace a large part of software engineering work.
But soon the limitations appeared. Moving from code generation to a production-ready system introduced challenges around architecture, maintainability, and integration. At one point, I had a couple of leading AI models execute the same refactor so I could compare the results. What stood out is that none of them delivered an acceptable result; all of them needed rework.
In a second phase, I experimented with a multi-agent setup. One model acted as a product owner, another as an architect, and smaller open models ran locally or on GPU-enabled OpenShift clusters to implement tasks. This is where platform engineering really started to show its value. By running models on OpenShift AI, integrating with Kafka for event-driven workflows, and using GitOps pipelines, I could create a more controlled, scalable, and sovereign environment for AI-assisted development.
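To make the role-based setup concrete, here is a minimal sketch of the idea in Java: each "agent" is just a role plus a function that calls a model, and the agents are chained so each one's output becomes the next one's input. The `Agent` abstraction and the stub "models" are illustrative placeholders I made up for this sketch; in the real setup, the function would call models served on OpenShift AI.

```java
import java.util.List;
import java.util.function.Function;

public class AgentPipeline {

    // An "agent" is a role label plus a function that calls a model.
    record Agent(String role, Function<String, String> model) {
        String handle(String input) {
            // Tag the input with the role before handing it to the model.
            return model.apply("[" + role + "] " + input);
        }
    }

    // Chain the agents: each agent's output is the next agent's input.
    static String run(List<Agent> agents, String idea) {
        String artifact = idea;
        for (Agent agent : agents) {
            artifact = agent.handle(artifact);
        }
        return artifact;
    }

    public static void main(String[] args) {
        // Stub "models" that only annotate the artifact; in practice these
        // would be remote calls to models running on OpenShift AI.
        Agent productOwner = new Agent("product-owner", s -> s + " -> user story");
        Agent architect = new Agent("architect", s -> s + " -> module plan");
        Agent implementer = new Agent("implementer", s -> s + " -> code");

        System.out.println(run(List.of(productOwner, architect, implementer), "chatbot feature"));
    }
}
```

The point of the sketch is the shape, not the plumbing: because each role is isolated behind a narrow interface, individual roles can be backed by smaller, locally hosted models without changing the pipeline.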
However, even with this setup, challenges remained. As the codebase grew, maintaining structure became harder. Context sizes increased, costs increased, and AI-generated code sometimes introduced inconsistencies or outdated practices.
This is exactly where clean architecture and platform engineering principles become essential. By structuring applications into modular components, smaller models can work more effectively, systems remain maintainable, and workloads can run in sovereign environments.
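A small sketch of that layering, in Java: the domain defines a port (an interface), the use case contains pure application logic, and the AI integration is just an adapter plugged in behind the port. The names (`AnswerGenerator`, `AskQuestionUseCase`) are illustrative, not taken from the actual demo; the adapter here is a stub where the real demo would call a model served on OpenShift AI.

```java
public class CleanArchitectureSketch {

    // Domain port: the core depends on this interface, not on any AI vendor.
    interface AnswerGenerator {
        String answer(String question);
    }

    // Use case: pure application logic. Small, isolated units like this are
    // exactly what smaller models can work on effectively.
    record AskQuestionUseCase(AnswerGenerator generator) {
        String execute(String question) {
            if (question == null || question.isBlank()) {
                return "Please ask a question.";
            }
            return generator.answer(question.trim());
        }
    }

    public static void main(String[] args) {
        // Adapter: a stub so the sketch stays self-contained; in the demo
        // this would be the call to a served model.
        AnswerGenerator stub = q -> "Echo: " + q;
        AskQuestionUseCase useCase = new AskQuestionUseCase(stub);
        System.out.println(useCase.execute("  What is Kafka? "));
    }
}
```

Swapping the adapter (a hosted model, a local model, a mock in tests) never touches the use case, which is what keeps AI-generated changes contained and reviewable.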
Results
At the beginning of the experiment, I was genuinely blown away by what AI could do, but when trying to turn the generated code into a production-ready system, the reality became more nuanced.
It worked, but it required strong guardrails: CI/CD pipelines, trusted software supply chains, dependency validation, and security checks were all areas where platforms like OpenShift provide strong value.
AI significantly accelerated the work. But it did not replace human expertise.
Can AI replace people when it comes to writing blog posts or social media content?
Last but not least, I regularly write blog posts about software engineering, platform engineering, AI, and technical business strategy.
Experiment 1: Writing blog posts
When I write blog posts today, I still start manually. I experimented with asking several AI models to generate full blog posts from a simple idea, but the results were usually the same—generated content that looked correct on the surface but felt hollow and meaningless. It rarely shared real insights or taught anything useful. None of those AI-generated posts ever made it to publication.
Where AI really helps is in improving my drafts. My initial drafts are often too long or grammatically incorrect, and AI can quickly help shorten and rephrase them. That works quite well, although I never publish the output directly. A final editing and curation round is always needed before I am satisfied with the result.
Another useful use case is asking AI to review the article. It can highlight unclear sections, identify missing concepts, or suggest where the content might be too technical or not technical enough for a specific audience. In that sense, AI acts more like a grammar checker and sparring partner than a full blog writer.
Something to keep in mind: when using AI to rephrase, I have noticed that the models often dilute the most critical strategic elements. Complex discussions around digital sovereignty, specific regulatory landscapes, or the nuances of proprietary vs. open models often get softened in the first pass. The AI tends to smooth the strategic edge that defines expert leadership.
Experiment 2: Writing social media posts
Social media is a different story. To be honest, I am slowly becoming allergic to fully AI-generated posts on platforms like LinkedIn. Many of them feel empty and repetitive. The same news item or trending topic suddenly appears in dozens of nearly identical posts written by different authors.
From a technical perspective, automating social media publishing is impressive. But in many cases, the actual value disappears. Instead of original ideas, you end up reading the same statements, analyses, and "breaking insights" repeated over and over again.
I still use AI occasionally in this context, mostly to shorten or slightly rephrase texts. But because of the growing amount of generic AI content online, I recently started publishing a weekly insight post where I intentionally try not to rely on AI.
If you are interested, you can find these posts on LinkedIn under the hashtag #maartensstory. And yes, that also means you will probably encounter a few spelling mistakes along the way.
Results
Once again, the pattern is clear. AI is a powerful assistant, but real value still comes from subject matter experts, especially in complex domains like AI adoption, platform engineering, and enterprise architecture.
My findings: Is AI going to replace us?
So, is AI going to replace us in its current form, as of March 2026?
To answer that question, we explored 3 practical areas.
Can AI replace people when it comes to analysis and decision-making?
No. SMEs are still required. AI can significantly accelerate analysis and act as a sparring partner, but the first answer produced by a model is rarely the one you should base strategic decisions on. As a CIO, CTO, architect, or technical lead, you still need expertise to question assumptions, validate comparisons, and verify you're not comparing apples with oranges.
Can AI replace people when it comes to coding and software development?
No. SMEs remain essential. Software development is much more than generating a few lines of code. It involves architecture, maintainability, security, and long-term ownership. AI can speed up implementation and even automate parts of development, but experienced engineers still provide enormous value, even in an increasingly agentic coding world. (And in the end, SMEs can help keep the AI bill under control.)
Can AI replace people when it comes to writing blog posts or social media content?
Again, no. SMEs remain a necessity. AI can generate text, but often without meaningful insight or originality. Used correctly, it can act as an amplifier, a grammar checker, or a sparring partner. Used incorrectly, it produces large volumes of hollow content that add little value.
So, is AI going to replace us?
Based on my experience, AI is not currently able to replace real human expertise. The organizations that win will not be those replacing experts with AI, but those that enable their experts to work more quickly, more safely, and more effectively with AI as a tool and assistant. AI can help experts work faster and explore ideas more efficiently, but it doesn't eliminate the need for expertise.
At least for now, I still believe this story ends with: and together they lived happily ever after.
About the author
Maarten Vandeperre is a Specialized Solutions Architect at Red Hat focused on platform engineering, AI enablement, and sovereign platform design. He helps organizations build modern application platforms on OpenShift that reduce cognitive load, enable developer autonomy, and keep enterprises in control of their data, technology, and AI strategy.
With a strong background in software development and clean architecture, Maarten bridges application design and infrastructure, mapping architectural principles to scalable, secure, and compliant platforms. His work spans sovereign AI, internal developer platforms, model serving, and cloud-native integration patterns. He regularly speaks at developer and AI conferences across Europe, sharing practical insights on platform engineering, AI, and digital sovereignty.