
Restricting Akuity NetworkPolicies from 0.0.0.0/0 to Internal Private IP Ranges

How to Secure Akuity Agent and Platform NetworkPolicies while minimizing outage risk

Security scans may flag Akuity-related Kubernetes NetworkPolicy resources for allowing ingress or egress to 0.0.0.0/0. This typically triggers high-severity alerts such as deny-public-ingress, especially in regulated environments.

The alert commonly applies to the following NetworkPolicies in the akuity namespace:

  • akuity-agent-network-policy

  • argocd-application-controller-network-policy

  • argocd-applicationset-controller-network-policy

  • argocd-notifications-controller-network-policy

  • repo-server-network-policy

  • argocd-redis-ha-server-network-policy

  • argocd-redis-ha-proxy-network-policy

  • argocd-redis-custom-network-policy

These policies are designed to ensure full functionality of the Akuity Platform and Argo CD components under a wide variety of cluster and networking configurations.
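For reference, the pattern that scanners flag has the following shape. This is an illustrative sketch, not the exact contents shipped by Akuity — the pod selector and rule layout in the real policies will differ:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: akuity-agent-network-policy
  namespace: akuity
spec:
  podSelector: {}            # illustrative; the actual policies select specific components
  policyTypes:
    - Ingress
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0  # the open CIDR that triggers deny-public-ingress-style alerts
```

You can inspect the real contents in your cluster with kubectl get networkpolicy -n akuity -o yaml before making any changes.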

Why Akuity Uses Broad NetworkPolicies

By default, Akuity ships permissive NetworkPolicies to:

  • Avoid unexpected connectivity issues

  • Support diverse cluster topologies

  • Reduce operational risk during installation and upgrades

  • Ensure agents can reliably communicate with the Akuity control plane and Argo CD services

Overly restrictive NetworkPolicies are a common cause of:

  • Agent disconnects

  • Applications stuck in an Unknown or Progressing state

  • Repo-server and Redis communication failures

  • Intermittent reconciliation errors

Recommended Approach (Best Practice)

Use Cluster-Level Firewall Controls

Akuity strongly recommends restricting traffic at the infrastructure or firewall level rather than modifying Kubernetes NetworkPolicies.

Examples:

  • Cloud provider firewall rules (AWS SGs, Azure NSGs, GCP VPC firewalls)

  • On-prem perimeter firewalls

  • Egress controls at the node or subnet level

Benefits

  • Easier to reason about and troubleshoot

  • Lower risk of breaking internal pod-to-pod communication

  • Clear separation of security and application logic


Modifying NetworkPolicies (If Required)

If your organization mandates Kubernetes-level restrictions, proceed with caution.

⚠️ Warning
Modifying Akuity NetworkPolicies incorrectly can cause partial or complete outages.

General Guidance

  • Always test in non-production first

  • Start by allowing more than you think you need

  • Tighten incrementally

  • Monitor pod logs, restarts, and agent connectivity closely
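As a sketch of the "tighten incrementally" step, a broad egress rule can be narrowed from 0.0.0.0/0 to the RFC 1918 private ranges. This is a fragment only — where it sits inside each policy's spec, and whether egress to private ranges alone is sufficient, depends on the specific component:

```yaml
# Fragment of a NetworkPolicy spec: egress narrowed from 0.0.0.0/0
# to the RFC 1918 private ranges. Verify each component's actual
# destinations before applying.
egress:
  - to:
      - ipBlock:
          cidr: 10.0.0.0/8
      - ipBlock:
          cidr: 172.16.0.0/12
      - ipBlock:
          cidr: 192.168.0.0/16
```

Note that agents must still reach the Akuity control plane, which is typically a public endpoint; a pure private-range egress rule can break the agent tunnel unless the control-plane address is also allowed.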

Where to Apply NetworkPolicy Changes

1. Agent-Side NetworkPolicies

For Akuity Agents installed in workload clusters:

  • Use Agent Kustomizations

  • Changes propagate automatically once saved

Typical workflow

  1. Define or patch NetworkPolicies via Kustomize

  2. Apply changes

  3. Verify agent tunnel stability and pod health
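A hypothetical Kustomize patch for step 1 might look like the following. The patch target and the JSON-patch path are assumptions — confirm the actual rule index and structure against kubectl get networkpolicy akuity-agent-network-policy -n akuity -o yaml before applying:

```yaml
# kustomization.yaml -- hypothetical patch restricting the agent's
# egress rule to RFC 1918 ranges; the /spec/egress/0/to path assumes
# the first egress rule holds the open CIDR.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patches:
  - target:
      kind: NetworkPolicy
      name: akuity-agent-network-policy
    patch: |-
      - op: replace
        path: /spec/egress/0/to
        value:
          - ipBlock:
              cidr: 10.0.0.0/8
          - ipBlock:
              cidr: 172.16.0.0/12
          - ipBlock:
              cidr: 192.168.0.0/16
```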


2. Control Plane (Akuity Platform / Argo CD)

For Argo CD and platform components:

  1. Modify the Helm values:

    instanceValues:
      kustomization:
        ...
  2. Apply updated Helm values

  3. Confirm the platform-controller pod restarts

  4. Trigger a settings change (e.g., bump Argo CD version) to force Kustomization reapplication
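Putting the steps together, a sketch of the values fragment might look like the following. The exact nesting under instanceValues and whether kustomization takes a string or a map is an assumption — confirm the schema for your chart version:

```yaml
# Hypothetical Helm values fragment; verify key names and nesting
# against your Akuity platform chart's values schema.
instanceValues:
  kustomization: |
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    patches:
      - target:
          kind: NetworkPolicy
          name: repo-server-network-policy
        patch: |-
          - op: replace
            path: /spec/egress/0/to
            value:
              - ipBlock:
                  cidr: 10.0.0.0/8
              - ipBlock:
                  cidr: 172.16.0.0/12
              - ipBlock:
                  cidr: 192.168.0.0/16
```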
