Ingress-NGINX is retiring. Here's what that means for your Kubernetes cluster.

Category

Tech

Published

February 10, 2026

Ingress-NGINX reaches end-of-life in March 2026. No more patches, no security fixes — and half of all Kubernetes teams still haven't migrated. Here's what the retirement means, who owns the problem, and how to approach the migration.

In March 2026, one of the most widely deployed components in the Kubernetes ecosystem reaches end-of-life. Ingress-NGINX — the community-maintained controller that handles inbound traffic for roughly half of all production Kubernetes clusters — will receive no more patches, no security fixes, and no further updates of any kind.

This is not a deprecation notice. It is not a soft landing. It is a hard stop.

If you are running Kubernetes in production and you have not already started planning your migration, this article is for you.

What is Ingress-NGINX, and why does it matter?

Ingress-NGINX is the controller that sits at the edge of your Kubernetes cluster and routes external HTTP and HTTPS traffic to the right services. For most teams, it is as foundational as the cluster itself — quietly doing its job while everything else is built on top of it.
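For readers who have inherited a cluster rather than built one, this is the kind of resource we are talking about. A minimal sketch — hostname, secret, and service names are illustrative placeholders:

```yaml
# A minimal Ingress routing external HTTPS traffic to an in-cluster service.
# Hostname, TLS secret, and service names here are illustrative only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    # Controller-specific behavior is configured through annotations like this.
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Most production clusters hold dozens or hundreds of these, and the annotations are where the controller-specific coupling lives.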

That is also what makes this retirement so disruptive. This is not a peripheral dependency you can swap out on a quiet Tuesday afternoon. For organizations running business-critical applications on Kubernetes, the ingress layer is deeply embedded: tied to routing rules, TLS certificates, custom annotations, cert-manager integrations, and deployment pipelines that have accumulated over years.

The project became the de facto standard early in Kubernetes history, largely because it was flexible, cloud-agnostic, and well-documented. It ships as the default ingress controller on major platforms including RKE2, IBM Cloud, and Alibaba ACK. According to Datadog's telemetry data, approximately 50% of cloud-native environments rely on it today.

Why is it being retired?

The short answer: the project ran out of people willing to maintain it.

For years, Ingress-NGINX was sustained by one or two volunteers working evenings and weekends. Every CVE, every bug report, every feature request landed on a tiny team with no institutional backing. The Kubernetes community repeatedly asked vendors — whose products depended on Ingress-NGINX functioning correctly — to contribute maintainers. Those calls went unanswered.

The technical debt compounded the problem. Features like configuration snippets, once praised for their flexibility, turned out to be fundamentally difficult to make secure. The Ingress-NGINX architecture assumes that anyone who can create Ingress objects is a trusted cluster administrator — an assumption that does not hold in multi-tenant environments. Fixing this would require a ground-up redesign, not a patch.
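To make the snippet problem concrete, here is the kind of annotation in question (an illustrative sketch, not taken from any real deployment). It injects raw NGINX directives from an Ingress object straight into the controller's shared configuration — which is exactly why anyone allowed to create Ingress objects effectively holds administrator-level power:

```yaml
# Illustrative only: a configuration-snippet annotation injects raw NGINX
# directives into the controller's generated config. Because the controller
# process is shared, an Ingress author can affect traffic handling cluster-wide,
# which is why this feature proved so hard to secure in multi-tenant clusters.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: snippet-example
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Custom: example";
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```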

The planned successor, InGate, never reached maturity and was also retired. There is no official replacement from the same community.

The Kubernetes Steering Committee put it plainly in their January 2026 statement: the flexibility that made Ingress-NGINX a boon has become a burden that cannot be resolved. Continuing to maintain it, even with additional resources, is no longer reasonable.

The security picture is already bad — and getting worse

Before we get to migration strategies, it is worth being direct about the risk of staying on Ingress-NGINX.

In March 2025, a set of five vulnerabilities collectively dubbed "IngressNightmare" was disclosed. The most critical, CVE-2025-1974, carried a CVSS score of 9.8 and enabled unauthenticated remote code execution — meaning an attacker with network access to the admission webhook could take full control of a Kubernetes cluster without any credentials. Wiz Research found over 6,500 clusters publicly exposed at the time of disclosure.

That was when the project still had maintainers and patches were being issued.

In February 2026 — just weeks before the retirement deadline — four new HIGH-severity CVEs were disclosed. They were patched. From March 2026 onward, they would not have been.

The Kubernetes Steering Committee is not using diplomatic language on this point: "Choosing to remain with Ingress-NGINX after its retirement leaves you and your users vulnerable to attack." Existing deployments will continue to work, which is precisely the danger — you may not know you are compromised until it is too late.

For organizations subject to SOC 2, PCI-DSS, ISO 27001, or HIPAA, there is an additional dimension: running EOL software in the L7 data path triggers automatic compliance findings. Security teams are already flagging this in audits, blocking production promotions, and in some cases delaying customer deployments.

Who owns this problem?

In most organizations, the migration lands on three groups.

Platform engineers and SREs get the operational problem. They own the ingress layer, they understand how deeply embedded it is, and they are the ones who have to execute a migration without causing downtime. They are also the ones who, if nothing changes, will be managing an unpatched, unsupported component sitting in the critical path of every inbound request to their cluster.

Security and compliance teams get it as an audit problem. EOL software in the traffic path is a finding that does not go away on its own. The longer it sits, the harder it becomes to explain to auditors, customers, and leadership.

Engineering leadership — VPs of Engineering, Heads of Infrastructure, CTOs at smaller organizations — gets it as a risk and prioritization problem. Migration takes real engineering time. It competes with product work. And unlike most infrastructure decisions, this one has a hard external deadline that does not care about your roadmap.

The challenge is that all three groups need to be aligned before anything actually moves. In our experience, the migrations that stall are not the ones that lack technical competence — they are the ones where no one has been given clear ownership of the decision.

What does migration actually involve?

This is where most articles stop at "consider Gateway API" and leave you to figure out the rest. We will try to be more concrete.

The first thing to understand is that there is no drop-in replacement for Ingress-NGINX. Every alternative requires some degree of rework. The question is how much, and the answer depends entirely on how you have been using it.

If your usage is relatively standard — basic routing rules, TLS termination, standard annotations — migration is straightforward in principle. Tools like ingress2gateway from kubernetes-sigs can automate much of the translation. Controllers like NGINX Gateway Fabric (F5), Envoy Gateway, Traefik, or Cilium all have migration guides and support for common Ingress patterns.
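As a rough sketch of what that translation produces — resource and Gateway names are assumed for illustration — a simple path-based Ingress rule maps to a Gateway API HTTPRoute along these lines:

```yaml
# Sketch of the Gateway API equivalent of a simple Ingress rule.
# The Gateway name ("external") and service names are assumed; tools like
# ingress2gateway generate resources roughly of this shape.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
    - name: external   # the Gateway that replaces the controller's entry point
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web
          port: 80
```

Note that TLS termination, which lived on the Ingress, moves to the Gateway resource — one of several places where the mapping is close but not one-to-one.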

If your usage is complex — heavy reliance on configuration snippets, custom NGINX directives, multi-tenant routing, or deeply integrated cert-manager and external-dns setups — migration requires careful discovery work before you can even choose an alternative. One team we are aware of spent over 120 engineer-hours just handling annotation translation for their existing clusters before cutover. That is not unusual.

The migration path that consistently works looks like this:

Discover first. Audit every Ingress resource across every namespace and cluster. Map which annotations you are using, which are custom or snippet-based, and what the downstream dependencies are. Most teams are surprised by what they find.

Choose deliberately. The right replacement depends on your context. Teams that want minimal disruption and already know NGINX tend to go with NGINX Ingress Controller (F5) or NGINX Gateway Fabric. Teams that want to modernize fully tend to move to Gateway API implementations like Envoy Gateway or Cilium. Both are valid — but they are different decisions with different migration paths, and mixing them up wastes time.

Run in parallel. Deploy the new controller alongside the existing one using a different ingressClassName. Validate thoroughly before shifting traffic. Use blue/green or canary patterns where the stakes are high. Define explicit rollback triggers before you start.

Monitor aggressively after cutover. Connection handling, certificate renewal, long-lived connections, and edge cases in annotation translation have a habit of surfacing 48–72 hours after a migration, not immediately.
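The parallel-run step hinges on ingressClassName: both controllers run side by side, each watching only the Ingresses that reference its own IngressClass, so traffic can be shifted one resource at a time. A hedged sketch — class names and the replacement controller are illustrative:

```yaml
# Sketch of running two controllers side by side during migration.
# Class names are illustrative; each controller reconciles only Ingresses
# that reference its own IngressClass.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-legacy
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: traefik-new
spec:
  controller: traefik.io/ingress-controller
---
# Migrate one Ingress at a time by switching its ingressClassName,
# validating, and rolling back by switching it back.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: traefik-new   # was: nginx-legacy
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Switching the class back is the rollback trigger in practice, which is why defining those triggers before cutover matters.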

A note on managed Kubernetes

If you are running AKS, GKE, or another managed Kubernetes service, your situation may be slightly different. Some providers have extended support commitments for critical CVE patches into late 2026. Check your provider's SLA carefully before assuming you are immediately exposed — but also do not use it as a reason to delay planning. Extended patches cover specific disclosed vulnerabilities, not the broader risk of running unsupported software in perpetuity.

What we are seeing in the market

The honest picture is that most teams are behind.

After the retirement announcement in November 2025, a Reddit survey found that 44% of users were still running Ingress-NGINX with no migration plan in place. That number has likely moved since, but not enough. The Kubernetes Steering Committee issued a second, more urgent statement in January 2026 specifically because the community was not moving fast enough.

The teams that are handling this well share a few characteristics. They have assigned clear ownership — usually a platform or SRE lead who is accountable for the migration timeline. They ran discovery early and found the complexity before it found them. And they are treating it as a structured project with a defined scope, not as ad-hoc work to be done between other priorities.

The teams that are struggling tend to have the same problem: the migration is technically understood but organizationally stuck. Nobody has been given the mandate, the time, or the budget to own it.

This is, ultimately, a solvable problem. Ingress-NGINX migration is not more complex than other major infrastructure transitions — but it does require treating it like one.

The deadline is fixed and close. The question now is how you respond to it.

Redeploy is a cloud, data, and AI partner built for enterprise complexity. If you are working through an Ingress-NGINX migration and want to talk through your options, reach out.
