Exposing Applications
Expose your applications using Kubernetes Services
Kubernetes Workloads can be exposed to each other, but they can also be publicly exposed to the Internet using Kubernetes Services and Ingresses.
A Service allocates a dedicated IP address for the exposed application, whereas an Ingress works for HTTP-based protocols to alleviate the need for a separate IP at each endpoint.
For stateless Web services, the Serverless framework may be a good option. In this framework, the application is automatically deployed with a TLS-enabled hostname and autoscaling enabled.
Internal Services
Internal, cluster-local Services should be configured as regular `ClusterIP` Services.
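As a minimal sketch of such an internal Service (the `sshd` name, port, and selector are illustrative, mirroring the public example later on this page):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: sshd-internal
spec:
  type: ClusterIP # the default Service type; may be omitted
  ports:
    - name: sshd
      port: 22
      protocol: TCP
      targetPort: sshd
  selector:
    app.kubernetes.io/name: sshd
```

This Service is reachable only from inside the cluster, via its cluster-internal IP or DNS name.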
Public Services
Making Services public to the Internet is done by deploying a `LoadBalancer` Service type with an annotation that allocates a public IP for the Service.
Without a public IP annotation, a private static IP will be allocated. This is mostly useful for Services accessed from outside the cluster via a Site-to-Site VPN.
Depending on the region where your workloads are configured to run, public IPs are allocated from the corresponding regional address pool:

| Region | Address Pool Label |
| --- | --- |
| ORD1 | public-ord1 |
| LGA1 | public-lga1 |
| LAS1 | public-las1 |
Example manifest
In the `sshd-public-service.yaml` example manifest, an SSHD `LoadBalancer` Service is deployed using the region annotation `metallb.universe.tf/address-pool: public-ord1`:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/address-pool: public-ord1
    metallb.universe.tf/allow-shared-ip: default
  name: sshd
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: sshd
      port: 22
      protocol: TCP
      targetPort: sshd
  selector:
    app.kubernetes.io/name: sshd
```
To ensure optimal traffic routing, ensure that your Workload is only scheduled to run in the same region from which a public IP is being requested. Use the region label affinity to limit scheduling of the Workload to a single region.
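As a sketch of such a region affinity, assuming nodes carry the standard `topology.kubernetes.io/region` label with the region name as its value (an assumption; verify the label key and values on your nodes), a Workload could be pinned to ORD1 like this:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/region
              operator: In
              values:
                - ORD1 # should match the region of the requested address pool
```

With this affinity in place, Pods only schedule onto nodes in the same region as the `public-ord1` address pool, avoiding cross-region traffic hops.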
Attaching Service IPs directly to Pods
The traditional Kubernetes pattern dictates that one or many Pods with dynamic internal IP addresses be exposed behind a Service or Ingress which itself has a static IP address.
For certain use cases, such as where only one Pod is behind a Service, it might make sense to attach the Service IP directly to the Pod. A Pod would then have a static public IP as its Pod IP.
All connections originating from the Pod will show this IP as its source address, and this address will serve as its local IP.
This is a non-standard approach for containers, and should be used only when the traditional Service/Pod pattern is not feasible.
Directly attaching the Service IP is beneficial in the following scenarios:
- The application needs to expose a large number of ports (more than 10), so listing them out in the Service definition is impractical
- The application needs to see the Service IP on the network interface inside the Pod
- Connections originating from the Pod to the outside world need to originate from the Service IP
- The application needs to receive all traffic, regardless of type and port
- The Workload is a Virtual Machine type, where a static IP provides a more native experience
An application that directly attaches to a Service IP can run with a maximum of one replica, as there would otherwise be multiple Pods with the same Pod IP. Also, traffic to the Pod is not filtered: all traffic inbound to the IP is sent to the Pod. Network Policies can be used for additional security.
Example manifest
A stub Service needs to be created to allocate the IP. The Service should expose only port `1`, as shown in the `ip-direct-attached-to-pod.yaml` example:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    metallb.universe.tf/address-pool: public-ord1
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 1 # Do not change this
      targetPort: attach
      protocol: TCP
      name: attach
      # Do not add any additional ports, it is not required for direct attach
  selector:
    coreweave.cloud/ignore: ignore # Do not change this
```
Then, to attach the IP from the Service directly to a Pod, annotate the Pod spec:
```yaml
annotations:
  net.coreweave.cloud/attachLoadbalancerIP: my-app
```
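As an illustrative sketch of the annotation in context (the Pod name and container image are placeholders, not part of the original example), the annotation goes in the Pod's metadata and references the stub Service by name:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    net.coreweave.cloud/attachLoadbalancerIP: my-app # name of the stub Service
spec:
  containers:
    - name: my-app
      image: nginx:stable # placeholder image
```

Once scheduled, the Pod receives the Service's public IP as its Pod IP, for both inbound and outbound traffic.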
Using Ingresses
Using an Ingress for HTTP-based applications saves IP address space and automatically provides a DNS name as well as a TLS certificate, enabling access to your application via `https`.
Ingresses and Ingress Controllers are managed by CoreWeave Cloud, where each data center region features its own separate Ingress Controller.
There are two main ways of using Ingresses on CoreWeave:
- Deploy a standard Ingress, which provides a `.coreweave.cloud` domain with no additional setup required.
- Deploy a standard Ingress together with Traefik Ingress Controllers, which are required for optional routing customizations.
Traefik Ingress Controllers may be installed from the Applications Catalog by searching for `Traefik`.
Using a standard Ingress
To target an Ingress, the manifest must specify:
- a `spec.rules.host` value - for example, `<my-app>.<namespace>.<region-label>.ingress.coreweave.cloud`.
- a `<region>-traefik` value for the `ingressClassName` key within the `spec` - for example, `ingressClassName: ord1-traefik`.
In the `ingress-example.yaml` example below, an Ingress called `my-app` is created in the `tenant-test-default` namespace for the ORD1 data center region. The Ingress region is targeted using both the `spec.rules.host` value and the `spec.ingressClassName` value, both of which are required in the manifest in order to select the correct Ingress.
A standard manifest should explicitly target the `ingressClassName` composed of the matching data center region followed by `-traefik` - for example, `ord1-traefik`.
Self-deployed Traefik Ingress Controllers installed via the Applications Catalog will not recognize `spec.ingressClassName`, and must use the `kubernetes.io/ingress.class` annotation instead.
Click to expand - Standard Ingress in ORD1
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  labels:
    app.kubernetes.io/name: my-app
  name: my-app
spec:
  ingressClassName: ord1-traefik
  rules:
    - host: my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      secretName: my-app-tls # This secret is automatically created for you
```
Currently, using the `metadata.annotations.kubernetes.io/ingress.class` annotation to target the Ingress Controller will still function, but it is a deprecated method. Manifests should be updated to use `spec.ingressClassName: <region>-traefik` instead, in addition to setting the `spec.rules.host` value.
If a manifest specifies only the `spec.rules.host` value without the corresponding `spec.ingressClassName` value, the target will default to the ORD1 (Chicago) data center region.
Using Traefik Custom Resources
Traefik Custom Resources may optionally be deployed to the cluster to customize routes to applications or to add redirects.
In the example below, a Traefik Custom Resource uses the Traefik HTTP Middleware, redirectScheme
, to force all incoming requests to use HTTPS if they do not already.
```yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-secure
  namespace: tenant-test-default
  labels:
    kubernetes.io/ingress.class: ord1-traefik
spec:
  redirectScheme:
    permanent: true
    scheme: https
```
If a Traefik Custom Resource is deployed in the cluster, it must target the Ingress Controller using the label `kubernetes.io/ingress.class: <region-label>-traefik`.
The example below combines a standard Ingress deployment with the Traefik `redirectScheme` Middleware used to force incoming connections to use HTTPS.
Click to expand - Standard Ingress plus Traefik Middleware in ORD1
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.middlewares: tenant-test-default-redirect-secure@kubernetescrd
  labels:
    app.kubernetes.io/name: my-app
  name: my-app
spec:
  ingressClassName: ord1-traefik
  rules:
    - host: my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      secretName: my-app-tls # This secret is automatically created for you
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-secure
  namespace: tenant-test-default
  labels:
    kubernetes.io/ingress.class: ord1-traefik
spec:
  redirectScheme:
    permanent: true
    scheme: https
```
Using External DNS
Kubernetes internal DNS provides Service discovery inside the cluster, allowing connections between Services and Pods without the use of IP addresses. Many applications will need to be reached both from inside the cluster as well as from the Internet.
CoreWeave provides external DNS out of the box for all types of applications. The given DNS name must be in the format of `<your-choice>.<namespace>.coreweave.cloud`.
Example manifest
The `external-dns.yaml` manifest provides an example of creating external DNS using the `external-dns.alpha.kubernetes.io/hostname` annotation:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/address-pool: public-ord1
    external-dns.alpha.kubernetes.io/hostname: my-sshd.tenant-test-default.coreweave.cloud
  name: sshd
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: sshd
      port: 22
      protocol: TCP
      targetPort: sshd
  selector:
    app.kubernetes.io/name: sshd
```