Deploying Network Policies with CKS
Enforcing Pod-level network segmentation on CKS clusters.
This tutorial demonstrates how to implement basic network policies on CoreWeave Kubernetes Service (CKS) clusters to segment and secure Pod-to-Pod communication. You'll learn the rationale behind each step, CoreWeave-specific best practices, and how to validate your configuration.
Prerequisites
- CKS Cluster: You need access to a CoreWeave Kubernetes Service (CKS) cluster. CKS runs on bare-metal nodes with hardware isolation (NVIDIA BlueField-3 DPU) and leverages the Cilium CNI by default for high-performance, eBPF-powered policy enforcement.
- kubectl Access: Ensure kubectl is installed and configured for your cluster identity and namespace.
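Before proceeding, it can help to confirm that kubectl can reach the cluster and that your identity is allowed to create the resources used in this tutorial. A quick sketch (substitute the namespace you plan to use):

```bash
# Confirm connectivity to the cluster API server
kubectl version

# Check RBAC permissions for the resources this tutorial creates
kubectl auth can-i create pods -n <namespace-name>
kubectl auth can-i create networkpolicies -n <namespace-name>
```

If either `can-i` check returns `no`, contact your cluster administrator before continuing.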
Create or utilize an existing namespace
Namespaces provide logical segmentation and isolation in Kubernetes. They are foundational for multi-tenancy and enforcing network policies scoped to individual teams or workloads. This step ensures your resources do not interfere with others and that network policies apply only within your segment.
```bash
$ kubectl create ns <namespace-name>
```

You can substitute `<namespace-name>` with a name relevant to your application, such as `demo-app`.
Deploy sample Pods
Deploy two simple Pods:

- `backend`: an NGINX server exposing port 80, labeled `app: backend`
- `frontend`: a BusyBox Pod running `sleep`, labeled `app: frontend`
These two Pods let us clearly demonstrate segmentation: by restricting which Pods can reach the backend, we exercise least privilege for service access.
```bash
$ kubectl apply -n <namespace-name> -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
```
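Before testing connectivity, make sure both Pods have actually started. For example:

```bash
# Block until both Pods report Ready, then show their IPs
kubectl wait --for=condition=Ready pod/backend pod/frontend -n <namespace-name> --timeout=120s
kubectl get pods -n <namespace-name> -o wide
```

The `-o wide` output includes each Pod's IP address, which is useful for the validation steps later.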
Create a default deny policy for your namespace
By default, Pods in Kubernetes can communicate freely. In CoreWeave environments, this is mitigated by a defense-in-depth architecture (hardware isolation, Cilium default policies), but explicit Kubernetes network policies are still recommended for application-level segmentation.
This policy blocks all ingress traffic to, and egress traffic from, Pods in the namespace unless specifically permitted:
```bash
$ kubectl apply -n <namespace-name> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
```
This policy implements a "default deny" posture essential for microsegmentation and preventing lateral movement if a Pod is compromised. CoreWeave's network architecture offloads kernel-level filtering to Cilium using eBPF, so policy enforcement is highly efficient and performed close to the DPU hardware.
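You can verify the default-deny posture immediately: with no allow rules yet, the frontend Pod should no longer be able to reach backend. A quick check (a timeout is the expected result):

```bash
# Look up the backend Pod's IP (Pod names are not DNS-resolvable)
BACKEND_IP=$(kubectl get pod backend -n <namespace-name> -o jsonpath='{.status.podIP}')

# This should time out: both egress from frontend and ingress to backend are denied
kubectl exec -n <namespace-name> frontend -- wget -qO- -T 5 "http://$BACKEND_IP"
```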
Create an allow policy for frontend to backend access
This policy allows only the `frontend` Pod to access the `backend` Pod, on any port. No other Pod, inside the namespace or out, will be able to reach `backend`.
```bash
$ kubectl apply -n <namespace-name> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF
```
This policy targets the backend Pod and allows ingress traffic only from Pods labeled `app: frontend` within the same namespace. All other traffic remains denied, implementing the principle of least privilege: only specifically required connections are permitted.
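One subtlety: because the deny-all policy also blocks egress, the frontend Pod's outbound traffic is still dropped even after this ingress rule is applied. A matching egress allow is needed as well; a sketch, where the policy name `allow-frontend-egress` is our choice:

```bash
kubectl apply -n <namespace-name> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
EOF
```

Note that this minimal rule permits egress only to the backend Pod. If your workloads also need to resolve names through cluster DNS, you would add a further egress rule allowing UDP and TCP port 53 to the cluster's DNS Pods.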
Validating your network policies
Validation is crucial to ensuring your policies have the intended effect. Here's how to test and confirm enforcement:
1. Enter the frontend Pod and attempt to reach the backend. Pod names are not DNS-resolvable, so look up the backend Pod's IP address first:

   ```bash
   $ BACKEND_IP=$(kubectl get pod backend -n <namespace-name> -o jsonpath='{.status.podIP}')
   $ kubectl exec -n <namespace-name> frontend -- sh -c "wget -qO- http://$BACKEND_IP"
   ```

   You should receive the NGINX welcome page.
2. Deploy a third Pod to test isolation:

   ```bash
   $ kubectl run other --rm -i -t -n <namespace-name> --image=busybox --restart=Never -- sh
   ```

   Inside this shell, run:
   ```bash
   $ wget -qO- -T 5 http://<backend-pod-ip>
   ```

   Substitute `<backend-pod-ip>` with the IP from step 1. The request should fail (typically timing out, since the policy drops the packets), demonstrating that only frontend has access.
3. Confirm the network policies are in place and enforced:

   ```bash
   # Verify the network policies exist
   $ kubectl get networkpolicy -n <namespace-name>

   # Inspect each policy's selectors and rules
   $ kubectl describe networkpolicy deny-all -n <namespace-name>
   $ kubectl describe networkpolicy allow-frontend-to-backend -n <namespace-name>
   ```

   You should see both policies listed, with the correct Pod selectors and rules configured.
To observe policy enforcement in action:

```bash
# Watch Cilium agent logs during your test connections
$ kubectl logs -n kube-system -l k8s-app=cilium --tail=50 -f
```

When you run the connection tests from steps 1 and 2, you may see log entries for the allowed and dropped flows, depending on the agent's log level; Cilium's Hubble tooling provides more detailed per-flow visibility.
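For a more direct view of drops, you can attach to a Cilium agent Pod and stream its packet monitor. A sketch, assuming the agent Pods carry the usual `k8s-app=cilium` label (in recent Cilium releases the in-Pod binary may be named `cilium-dbg`):

```bash
# Pick one Cilium agent Pod (label assumed to be k8s-app=cilium)
CILIUM_POD=$(kubectl get pod -n kube-system -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')

# Stream only dropped packets; rerun the blocked wget from step 2 and
# the drop events for the "other" Pod's traffic should appear here
kubectl exec -n kube-system "$CILIUM_POD" -- cilium monitor --type drop
```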
With your network policies in place and validated, you've implemented microsegmentation that uses CoreWeave's hardware-accelerated Cilium CNI for efficient policy enforcement at the DPU level. This provides application-layer security controls that complement the platform's built-in hardware isolation, with observability available through Cilium's metrics and logs. For deeper audit capabilities, CoreWeave supports tools like Cilium Tetragon for eBPF-based observability and Falco for runtime threat detection.