
Serverless

Deploy serverless applications on CoreWeave Cloud
CoreWeave Cloud enables clients to run their own code, manage data, and integrate applications, all without managing any infrastructure.
Serverless deployment is an especially good fit when the purpose of the application is to serve HTTP or gRPC requests, either internally or externally to and from the Internet.

Knative on CoreWeave

CoreWeave uses the Knative runtime to support deploying serverless applications with a single manifest, so no additional installation or configuration is necessary to deploy your applications.

Serverless benefits

🔐
Automatic public HTTPS endpoints
Never worry about managing SSL certificates for your serverless applications: with Knative and Let's Encrypt, HTTPS endpoints are provisioned automatically with every deployment.
📈
Autoscaling by default, including Scale-to-Zero
High availability is built into serverless application deployments on CoreWeave, so application resources scale automatically with traffic. Scaling to zero means an idle application consumes no resources and incurs no billable charges.
💰
No charge for public IPs
Public IP addresses do not incur any additional costs when deploying serverless applications on CoreWeave, making public distribution of the application easy.
🧪
Advanced deployment strategies
CoreWeave's implementation of the Knative runtime supports advanced deployment strategies, including traffic splitting techniques useful for blue/green and canary deployments, as shown in the example after the diagram below.
Diagram: Serverless deployment
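For example, a canary rollout can be declared directly in the Knative Service by splitting traffic between two Revisions. The manifest below is a minimal sketch: the Revision names helloworld-v1 and helloworld-v2 are hypothetical and correspond to Revisions created by previous and current deployments of the Service.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    metadata:
      name: helloworld-v2 # Explicitly name the new Revision
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    - revisionName: helloworld-v1 # Existing Revision keeps 90% of traffic
      percent: 90
    - revisionName: helloworld-v2 # Canary Revision receives 10% of traffic
      percent: 10
Shifting more traffic to the new Revision is then a matter of adjusting the percent values and re-applying the manifest.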

Deployment example

The following example manifest demonstrates how to deploy a simple serverless application onto CoreWeave Cloud.
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld # The name of the app
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"  # Allow scale to zero
        autoscaling.knative.dev/maxScale: "10" # Maximum number of Pods allowed to auto-scale to
    spec:
      # containerConcurrency defines how many active requests are sent to a single
      # backend Pod at a time. This configuration is important, as it affects how well
      # requests are load balanced over Pods. For a standard, non-blocking web application,
      # this can usually be rather high, e.g. 100. For GPU Inference, however, this should
      # usually be set to 1: GPU Inference only processes one request at a time, and a queue
      # should build up centrally in the Load Balancer rather than in the local Pod.
      containerConcurrency: 1
      containers:
        - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
          resources:
            limits:
              cpu: 2
              memory: 4Gi
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"
Important
containerConcurrency defines how many active requests are sent to a single backend Pod at a time. This configuration is important, as it affects how well incoming requests are load balanced over Pods.
For a standard, non-blocking web application, this can usually be a high number, e.g. 100. For GPU Inference, however, it should usually be set to 1, because GPU Inference only processes one request at a time. Setting containerConcurrency to 1 ensures that requests queue centrally in the Load Balancer rather than in the local Pod.
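Assuming the manifest above is saved to a file such as helloworld.yaml (the filename is illustrative), it can be deployed with a single kubectl apply:
$ kubectl apply -f helloworld.yaml
Knative then creates a Revision from the template and exposes the application behind an automatic public HTTPS endpoint.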

Ingress options

Note
Kourier is the default Knative Serving ingress. No additional annotation is required to use Kourier; however, the Kourier annotation may be added for the sake of explicitness.
CoreWeave supports two ingress options for Knative serving. The ingress can be specified by annotating the service using the networking.knative.dev/ingress-class annotation, with one of these values:
Ingress             Annotation value
Kourier (default)   kourier.ingress.networking.knative.dev
Istio               istio.ingress.networking.knative.dev
To use Istio, set the networking.knative.dev/ingress-class annotation to istio.ingress.networking.knative.dev, as shown below.
apiVersion: serving.knative.dev/v1 # Current version of Knative
kind: Service
metadata:
  name: helloworld # The name of the app
  annotations:
    networking.knative.dev/ingress-class: istio.ingress.networking.knative.dev # Use Istio
...
Important
The annotation must be added at creation time and cannot be changed afterwards.
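Because the annotation cannot be changed after creation, it can be useful to confirm which ingress class an existing Service uses. One minimal way to do this, assuming the helloworld Service from the example above, is to print its annotations with kubectl:
$ kubectl get ksvc helloworld -o jsonpath='{.metadata.annotations}'
If no networking.knative.dev/ingress-class annotation is present, the Service uses the default Kourier ingress.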

Service URL

Once the manifest is applied and the application is deployed, get the public URL of the Service using kubectl get ksvc:
$ kubectl get ksvc
NAME         URL                                                     LATESTCREATED      LATESTREADY        READY   REASON
helloworld   https://helloworld.default.knative.chi.coreweave.com   helloworld-ngzsn   helloworld-ngzsn   True
Note
  • If the URL provided does not use https, the domain may be too long to acquire an SSL certificate. Domains must be 64 characters or fewer in total to successfully provision SSL. For further assistance, please contact your CoreWeave Support Specialist.
  • URL endings vary depending on the ingress used:
    • Kourier URLs end with .ord1.coreweave.cloud
    • Istio URLs end with .chi.coreweave.com
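Once the Service reports Ready, the endpoint can be verified by sending a request to the URL. The example below assumes the helloworld-go sample image and TARGET value from the manifest above; the exact response text depends on the deployed application:
$ curl https://helloworld.default.knative.chi.coreweave.com
Hello Go Sample v1!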

Monitoring

Managed Grafana monitoring provides insights into requests, success rates, response times, and autoscaling metrics, with no metrics-specific code needed in the serverless application.
Screenshot: Grafana dashboard
To access Grafana, log in to your CoreWeave Cloud account, then navigate to the Account Details section in the left-hand navigation menu, and click Grafana. Clicking this link will open a new window in your browser.
Screenshot: Grafana in the left-hand menu