Exposing Applications

Expose your applications using Kubernetes Services

Kubernetes Workloads can be exposed to each other, but they can also be publicly exposed to the Internet using Kubernetes Services and Ingresses.

A Service allocates a dedicated IP address for the exposed application, whereas an Ingress works for HTTP-based protocols and removes the need for a separate IP address for each endpoint.

Note

For stateless Web services, the Serverless framework may be a good option. In this framework, the application is automatically deployed with a TLS-enabled hostname and autoscaling enabled.

Internal Services

Internal, cluster-local Services should be configured as regular ClusterIP services.
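For reference, a minimal ClusterIP Service might look like the following sketch; the name, port, and selector are placeholders for your own application:

---
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal # placeholder name
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app.kubernetes.io/name: my-app # placeholder selector for your application's Pods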

Public Services

To make a Service public to the Internet, deploy a Service of type LoadBalancer with an annotation that allocates a public IP for the Service.

Note

Without a public IP annotation, a private static IP will be allocated. This is mostly useful for Services accessed from outside the cluster via a Site-to-Site VPN.

Public IP address pools are tied to data center regions. Depending on where your Workloads are configured to run, use the corresponding address pool label:

Region    Address Pool Label
ORD1      public-ord1
LGA1      public-lga1
LAS1      public-las1

Example manifest

In the sshd-public-service.yaml example manifest, an SSHD LoadBalancer Service is deployed using the region annotation metallb.universe.tf/address-pool: public-ord1:

# sshd-public-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/address-pool: public-ord1
    metallb.universe.tf/allow-shared-ip: default
  name: sshd
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: sshd
      port: 22
      protocol: TCP
      targetPort: sshd
  selector:
    app.kubernetes.io/name: sshd
Important

To ensure optimal traffic routing, make sure your Workload is scheduled to run only in the region from which the public IP is requested. Use the region label affinity to limit scheduling of the Workload to a single region, as sketched below.
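As a minimal sketch, and assuming the nodes expose the standard topology.kubernetes.io/region label with the region name as its value, a node affinity limiting a Workload to ORD1 could look like this in the Pod spec:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/region # assumed region label key
              operator: In
              values:
                - ORD1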

Attaching Service IPs directly to Pods

The traditional Kubernetes pattern dictates that one or many Pods with dynamic internal IP addresses be exposed behind a Service or Ingress which itself has a static IP address.

For certain use cases, such as where only one Pod is behind a Service, it might make sense to attach the Service IP directly to the Pod. A Pod would then have a static public IP as its Pod IP.

All connections originating from the Pod will show this IP as their source address, and this address will serve as the Pod's local IP.

Note

This is a non-standard approach for containers, and should be used only when the traditional Service/Pod pattern is not feasible.

Directly attaching the Service IP is beneficial in the following scenarios:

  • The application needs to expose a large number of ports (more than 10), so listing them out in the Service definition is impractical
  • The application needs to see the Service IP on the network interface inside the Pod
  • Connections originating from the Pod to the outside world need to originate from the Service IP
  • The application needs to receive all traffic, regardless of type and port
  • The Workload is a Virtual Machine type, where a static IP provides a more native experience
Important

An application that directly attaches to a Service IP can run with a maximum of one replica, as there would otherwise be multiple Pods with the same Pod IP. Also, traffic to the Pod will not be filtered; all traffic inbound to the IP will be sent to the Pod. Network Policies can be used for additional security.
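To illustrate the Network Policy suggestion above, here is a minimal sketch that admits only SSH traffic to the Pod; the name and Pod label are placeholders, and port 22 is only an example:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-app-allow-ssh # placeholder name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: my-app # placeholder Pod label
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 22 # example: allow only SSH traffic in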

Example manifest

A stub Service needs to be created to allocate the IP. The Service should expose only port 1, as shown in the ip-direct-attached-to-pod.yaml example:

# ip-direct-attached-to-pod.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    metallb.universe.tf/address-pool: public-ord1
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 1 # Do not change this
      targetPort: attach
      protocol: TCP
      name: attach
  # Do not add any additional ports, it is not required for direct attach
  selector:
    coreweave.cloud/ignore: ignore # Do not change this

Then, to attach the IP from the Service directly to a Pod, annotate the Pod spec:

Example
annotations:
  net.coreweave.cloud/attachLoadbalancerIP: my-app
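For example, in a Deployment the annotation belongs on the Pod template metadata rather than on the Deployment itself. The sketch below uses placeholder names and an assumed container image:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # placeholder name
spec:
  replicas: 1 # must stay at 1 when attaching a Service IP directly
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
      annotations:
        net.coreweave.cloud/attachLoadbalancerIP: my-app # name of the stub Service
    spec:
      containers:
        - name: my-app
          image: my-app:latest # placeholder image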

Using Ingresses

Using an Ingress for HTTP-based applications saves IP address space and automatically provides a DNS name as well as a TLS certificate, enabling access to your application via https.

Ingresses and Ingress Controllers are managed by CoreWeave Cloud, where each data center region features its own separate Ingress Controller.

There are two main ways of using Ingresses on CoreWeave:

  • Deploy a standard Ingress, which provides a .coreweave.cloud domain with no additional setup required.
  • Deploy a standard Ingress in addition to Traefik Ingress Controllers, which are required for optional routing customizations.
Note

Traefik Ingress Controllers may be installed from the Applications Catalog by searching for Traefik.

Using a standard Ingress

To target the correct regional Ingress Controller, the manifest must specify:

  • a spec.rules.host value - for example, <my-app>.<namespace>.<region-label>.ingress.coreweave.cloud.
  • a <region>-traefik value for the ingressClassName key within the spec - for example, ingressClassName: ord1-traefik.

In the ingress-example.yaml example below, an Ingress called my-app is created in the tenant-test-default namespace for the ORD1 data center region. The Ingress region is targeted using the spec.rules.host value in tandem with the spec.ingressClassName value, both of which are required in the manifest in order to select the correct Ingress.

Note

A standard manifest should explicitly set ingressClassName to the matching data center region followed by -traefik. For example, ord1-traefik.

Self-deployed Traefik Ingress Controllers installed via the Applications Catalog will not recognize spec.ingressClassName, and must use the kubernetes.io/ingress.class annotation instead.
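For a self-deployed controller, the targeting would instead be expressed as an annotation on the Ingress. This is only a sketch; the class value is a placeholder for whatever class that controller is configured to watch:

metadata:
  annotations:
    kubernetes.io/ingress.class: my-traefik # placeholder: class watched by the self-deployed controller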

Standard Ingress in ORD1

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  labels:
    app.kubernetes.io/name: my-app
  name: my-app
spec:
  ingressClassName: ord1-traefik
  rules:
    - host: my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      secretName: my-app-tls # This secret is automatically created for you
Important

Currently, using metadata.annotations.kubernetes.io/ingress.class to target the Ingress Controller will still function, but it is a deprecated method. Manifests should be updated to use spec.ingressClassName: <region>-traefik instead, in addition to setting the spec.rules.host value.

If a manifest specifies only the spec.rules.host value without the corresponding spec.ingressClassName value, the target will default to the ORD1 (Chicago) data center region.

Using Traefik Custom Resources

Traefik Custom Resources may optionally be deployed to the cluster to customize routes to applications or to add redirects.

In the example below, a Traefik Custom Resource uses the Traefik HTTP Middleware, redirectScheme, to force all incoming requests to use HTTPS if they do not already.

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-secure
  namespace: tenant-test-default
  labels:
    kubernetes.io/ingress.class: ord1-traefik
spec:
  redirectScheme:
    permanent: true
    scheme: https

If a Traefik Custom Resource is deployed in the cluster, it must target the Ingress Controller using the label kubernetes.io/ingress.class: <region-label>-traefik.

The example below combines a standard Ingress deployment with the Traefik redirectScheme Middleware to force incoming connections to use HTTPS.

Standard Ingress plus Traefik Middleware in ORD1

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.middlewares: tenant-test-default-redirect-secure@kubernetescrd
  labels:
    app.kubernetes.io/name: my-app
  name: my-app
spec:
  ingressClassName: ord1-traefik
  rules:
    - host: my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - my-app.tenant-test-default.ord1.ingress.coreweave.cloud
      secretName: my-app-tls # This secret is automatically created for you
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-secure
  namespace: tenant-test-default
  labels:
    kubernetes.io/ingress.class: ord1-traefik
spec:
  redirectScheme:
    permanent: true
    scheme: https
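The traefik.ingress.kubernetes.io/router.middlewares annotation references the Middleware in the form <namespace>-<middleware-name>@kubernetescrd, which is why the value above reads tenant-test-default-redirect-secure@kubernetescrd.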

Using External DNS

Kubernetes internal DNS provides Service discovery inside the cluster, allowing connections between Services and Pods without the use of IP addresses. Many applications will need to be reached both from inside the cluster as well as from the Internet.

CoreWeave provides external DNS out of the box for all types of applications. The given DNS name must be in the format of <your-choice>.<namespace>.coreweave.cloud.

Example manifest

The external-dns.yaml manifest provides an example of creating external DNS using the external-dns.alpha.kubernetes.io/hostname annotation:

# external-dns.yaml
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/address-pool: public-ord1
    external-dns.alpha.kubernetes.io/hostname: my-sshd.tenant-test-default.coreweave.cloud
  name: sshd
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: sshd
      port: 22
      protocol: TCP
      targetPort: sshd
  selector:
    app.kubernetes.io/name: sshd
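Once the DNS record is in place, the Service can be reached by name, for example ssh <user>@my-sshd.tenant-test-default.coreweave.cloud, where <user> is a placeholder for a user configured in the SSHD application.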