Profiles define which resources and capabilities a sandbox can request: container image, runtime class, compute shape, namespace, ingress and egress, and pod placement. They serve as guardrails that administrators use to enforce policies, such as “sandboxes cannot have internet egress” or “sandboxes can only request up to N CPUs and M GB of memory.”
Profiles are bound to runners. A binding attaches a profile to a runner so that the runner enforces that profile’s guardrails for every sandbox it places. A runner can have multiple profiles bound to it, and a profile can be bound to multiple runners. End users do not have to name a profile when they request a sandbox: the runner uses its default binding. When a request exceeds every bound profile, the gateway rejects it with CWSANDBOX_NO_SUITABLE_RUNNER (or CWSANDBOX_PROFILE_MISMATCH if a specific profile was requested) and reports which capabilities or resources caused the rejection, so the client knows whether to retry with a smaller request, ask the admin for a more permissive profile, or pick a different runner.
This page uses “profile” throughout. The control-plane API names the persisted resource ProfileTemplate and cwic sandbox profile exposes the same concept to end users: there is no separate “profile” object distinct from a “profile template.” When you see “template” in a CLI subcommand or schema field, it is the same thing as a profile.
For an end-to-end walkthrough that creates a profile and enables a runner, see Get started. For the field-by-field schema, see the Profile reference.
Before you begin
Profiles are created and managed with the CoreWeave Intelligent CLI (cwic). Install and authenticate it before running any of the commands on this page.
YAML examples on this page show only the most relevant spec fields for the section you are reading. A complete profile also needs a top-level display_name (and optionally description and labels); the Get started walkthrough and the Profile reference show a full example.
Profile commands
The cwic sandbox profile group manages profiles. Binding a profile to a runner is a runner operation, covered at the end of this page.
| Goal | Command |
|---|---|
| Create a profile (wizard) | cwic sandbox profile create |
| Create a profile from a file | cwic sandbox profile create -f profile.yaml |
| List profiles | cwic sandbox profile get (alias ls) |
| Inspect one profile | cwic sandbox profile get [PROFILE-ID] |
| Detailed view | cwic sandbox profile describe [PROFILE-ID] |
| Edit a profile | cwic sandbox profile edit [PROFILE-ID] |
| Delete a profile | cwic sandbox profile delete [PROFILE-ID] |
| Change runner bindings | cwic sandbox runner edit [RUNNER-ID] |
All input accepts YAML or JSON. The runner edit command is how you change which profiles a runner exposes; see Manage profile bindings below.
Create a profile
The wizard is the fastest path to a working profile. It collects display name, description, namespace strategy, and egress modes interactively, then opens your editor with a pre-filled spec so you can add ingress, runtime class, or pod overrides before submitting.
```shell
cwic sandbox profile create
```
The wizard prompts you for:
- Display name (required, unique within your organization).
- Description (optional).
- Namespace strategy, one of per-user, per-profile, per-org, static. The wizard prompts for the namespace name when you pick static. See Choose a namespace strategy for what each option means.
- Egress modes, multi-select from internet, user, org, profile, allowlist, none. For allowlist, the wizard asks for a mode name and CIDR list. See Configure egress for what each mode allows.
After you save and exit the editor, the wizard prints a summary and asks for confirmation before submitting.
Use a file instead
File mode accepts YAML or JSON and is the preferred path for repeatable setups and for agents that author profile specs:
```shell
cwic sandbox profile create -f profile.yaml
```
The CLI prints the new profile’s ID. Reference it when you bind the profile to a runner.
File mode also accepts YAML or JSON on stdin (cwic sandbox profile create -f -), which makes it a natural interface for agents like Claude Code or Codex that generate profile specs on your behalf.
A minimal profile that denies all outbound network egress and uses a per-user namespace:
```yaml
display_name: default
description: Per-user sandboxes with no outbound network access
spec:
  namespace:
    strategy: per-user
  network:
    egress:
      default: deny-all
      modes:
        deny-all:
          type: none
```
The rest of this page covers the fields you’ll add to that spec for richer profiles.
Set the compute shape
These fields live directly under spec and shape the container and the node it runs on.
```yaml
spec:
  container_image: ghcr.io/myorg/python-ds:3.11-pytorch-2.3
  resource_defaults:
    cpuRequest: "1"
    memoryRequest: 2Gi
    cpuLimit: "4"
    memoryLimit: 8Gi
  instance_types: [h100]
  node_selector:
    gpu: h100
  tags: [ml, production]
```
| Field | Purpose |
|---|---|
| container_image | Image reference for sandbox pods. Leave unset to inherit the runner’s zone default. |
| runtime_class | Kubernetes runtimeClassName. Selects the isolation backend. See Pick a runtime class. |
| resource_defaults | Default CPU and memory requests and limits. Sandboxes can request different values at launch, but the request is rejected if it exceeds the organization’s per-sandbox resource quota. |
| instance_types | Restrict sandboxes to specific CoreWeave instance classes (for example [h100]). Expands to a node-affinity constraint. |
| node_selector | Kubernetes nodeSelector labels applied to every sandbox pod. Use for placement beyond instance_types. |
| tags | Free-form strings for organization. Tags are descriptive; they carry no authorization semantics today. |
Use instance_types for portable intent (this profile wants H100s) and node_selector for cluster-specific labels you control.
Pick a runtime class
runtime_class is a free-form string forwarded as Kubernetes runtimeClassName on the sandbox pod. CoreWeave does not ship a curated list of supported values: you can use any runtime class that you have installed and registered on the target CKS cluster.
Leave runtime_class unset to use the node default (typically runc), which is appropriate for trusted workloads inside your own organization. If you need stronger isolation, install the runtime of your choice on the cluster, define a RuntimeClass object that references it, and set runtime_class in the profile to that object’s name.
A profile that references a runtime class the cluster lacks fails to schedule sandboxes; the runner surfaces the error.
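As a sketch of that flow, the fragment below pairs a standard Kubernetes RuntimeClass object with a profile spec that references it. The runtime choice (gVisor), the handler name runsc, and the class name gvisor are assumptions for illustration; substitute whatever runtime you have actually installed on the cluster.

```yaml
# Standard Kubernetes RuntimeClass (node.k8s.io/v1). The handler name
# "runsc" assumes gVisor is installed and registered on the cluster nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
# Profile spec fragment that pins sandbox pods to that RuntimeClass.
spec:
  runtime_class: gvisor
```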
Choose a namespace strategy
spec.namespace controls which Kubernetes namespace each sandbox lands in. The strategy you pick determines how sandboxes are grouped for organizational boundaries that Kubernetes scopes by namespace: resource quotas, NetworkPolicies, RBAC, secrets, service discovery, and per-namespace cost allocation. A finer-grained strategy gives you tighter blast radius for those mechanisms and easier per-user or per-profile accounting.
Namespaces are an organizational boundary, not a security one. Two pods in different namespaces still share the node kernel, so for untrusted workloads (for example, model-generated commands or third-party code) rely on runtime_class (see Pick a runtime class) and spec.network for the actual isolation, and use the namespace strategy on top to keep accounting, quotas, and policy clean.
```yaml
spec:
  namespace:
    strategy: per-user
    namespacePrefix: sb-
    autoCreate: true
    labels:
      team: ml
```
| Strategy | One namespace per | Examples of when this fits |
|---|---|---|
| per-user | (organization, user) | Multi-tenant agent fleets where every user gets their own quota, secrets, and NetworkPolicy scope. The most common starting point. |
| per-org | Organization | Single-tenant environments where every sandbox shares a trust domain and you want one quota for the whole org. |
| per-profile | Profile | When a specific profile (for example, an untrusted-code profile) needs its own NetworkPolicies, quotas, or audit boundary separate from the rest of the cluster. |
| static | Cluster-wide (all sandboxes share staticNamespace) | Shared environments, or namespaces you provision and govern out-of-band with autoCreate: false. |
Additional fields:
- namespacePrefix is prepended to auto-generated names. Keep it short; namespaces have a 63-character limit.
- staticNamespace is required when strategy: static. Must be a valid DNS-1123 label.
- autoCreate defaults to true. Set it to false when you provision namespaces out-of-band.
- labels and annotations apply to auto-created namespaces. Useful for policy engines and cost allocation.
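Putting those fields together, a sketch of a static-strategy fragment for a namespace you manage out-of-band (the namespace name sandboxes-shared is an assumption):

```yaml
spec:
  namespace:
    strategy: static
    staticNamespace: sandboxes-shared  # must be a valid DNS-1123 label
    autoCreate: false                  # namespace is provisioned out-of-band
```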
Configure egress
spec.network.egress controls outbound traffic. You declare named modes and pick one as the default. Sandboxes inherit the default unless they opt into a different mode at launch.
If you omit spec.network entirely, sandboxes default to no outbound connectivity.
The “Examples” column below is illustrative, not prescriptive: a mode is appropriate any time its reachability semantics match what your workload needs.
| Type | Allows outbound to | Examples |
|---|---|---|
| internet | Anywhere on the public internet. | Agent workloads that fetch from PyPI, npm, GitHub. |
| allowlist | Only the CIDR ranges in cidrs. | Continuous integration runners with a fixed set of upstreams. |
| org | Other sandboxes in the same organization. | Multi-sandbox workflows within one tenant. |
| user | Other sandboxes owned by the same user. | Per-user agent fleets. |
| profile | Other sandboxes using the same profile. | Tightly-coupled workers sharing a profile. |
| none | Nothing. | Untrusted code execution. |
Allowlist a fixed set of upstreams
A test runner that only needs GitHub and a public CDN:
```yaml
spec:
  network:
    egress:
      default: ci-allowlist
      modes:
        ci-allowlist:
          type: allowlist
          cidrs:
            - 140.82.112.0/20
            - 151.101.0.0/16
```
Offer a split posture
Expose two modes so most sandboxes get internet while specific jobs opt into none:
```yaml
spec:
  network:
    egress:
      default: internet
      modes:
        internet:
          type: internet
        deny-all:
          type: none
```
Sandboxes launched with egress: deny-all get no outbound; everything else inherits the default.
Expose sandbox ports with ingress
Under spec.network.ingress, you define one or more named exposure levels. Each level is a reachability tier (for example, internal for sandboxes inside the organization or public for endpoints reachable from the open internet) that you configure with a scope and a Kubernetes Service strategy. When sandbox users expose a port, they pick from these names.
The example below defines two levels, internal and public:
```yaml
spec:
  network:
    ingress:
      internal:
        scope: org
        service:
          serviceType: ClusterIP
      public:
        scope: any
        expectsExternalAddress: true
        service:
          serviceType: LoadBalancer
        ingress:
          controllerName: nginx
          template: "{{.SandboxID}}.sandboxes.example.com"
```
Scope
| Scope | Reachable from |
|---|---|
| user | The same user’s other sandboxes. |
| profile | Sandboxes sharing the same profile. |
| org | Anything inside the organization. |
| any | No restriction. Use for truly public endpoints. |
Service configuration
Every exposure level creates a Kubernetes Service in front of the sandbox pod. service.serviceType picks which Service type the runner creates: ClusterIP for in-cluster reachability, NodePort to reach the sandbox through any cluster node’s IP on an allocated port, or LoadBalancer for an externally provisioned address.
Two optional annotation fields plug into cluster add-ons:
- service.serviceSelectorAnnotation pins which address pool allocates the external address (for example, MetalLB).
- service.dnsAnnotation lets a DNS controller (for example, ExternalDNS) provision a record for the sandbox.
Both take {key, value} pairs.
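As an illustrative sketch of those {key, value} pairs: the field names come from the schema above, but the annotation keys and values below are assumptions based on common MetalLB and ExternalDNS conventions; use whatever keys your cluster add-ons actually expect.

```yaml
spec:
  network:
    ingress:
      public:
        scope: any
        service:
          serviceType: LoadBalancer
          serviceSelectorAnnotation:
            # Assumed MetalLB address-pool annotation; pool name is a placeholder.
            key: metallb.universe.tf/address-pool
            value: sandbox-pool
          dnsAnnotation:
            # Assumed ExternalDNS hostname annotation; domain is a placeholder.
            key: external-dns.alpha.kubernetes.io/hostname
            value: sandboxes.example.com
```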
Ingress resource
When the exposure level needs an HTTP entry point in addition to a Service (for example, a public hostname routed by your cluster’s ingress controller), set the ingress block. The runner creates a Kubernetes Ingress resource alongside the Service:
- controllerName matches your cluster’s ingress controller (for example, nginx or istio) and selects which controller programs the route.
- template is a Go-template string that produces the hostname for each sandbox. Available variables are {{.SandboxID}} and {{.Namespace}}.
expectsExternalAddress
Set to true when the exposure needs a provisioned external IP before the sandbox is considered ready. The control plane waits for the LoadBalancer or ingress controller to assign an address before marking the sandbox healthy.
Fine-grained pod control
The fields above cover most profiles. When you need direct control over the underlying Kubernetes pod, spec.pod exposes a partial PodSpec plus pod metadata. Use it for fields the structured profile shape does not model: volumes, init containers, security context, tolerations, scheduler-specific annotations, and similar pod plumbing.
```yaml
spec:
  pod:
    metadata:
      labels:
        app: agent
      annotations:
        owner: ml-platform
    spec:
      volumes:
        - name: scratch
          emptyDir: {}
      initContainers:
        - name: prep
          image: busybox
          command: ["sh", "-c", "echo prepping"]
      securityContext:
        runAsNonRoot: true
    placement:
      instanceTypes: [h100]
```
The runner filters out reserved labels and annotations (such as sandbox-id) that it manages itself.
If you only need to pin instance types, prefer the top-level spec.instance_types. Keep spec.pod for the cases that actually need PodSpec depth.
Example: schedule sandboxes through the SUNK Pod Scheduler
On a cluster that runs CoreWeave SUNK, you can use spec.pod to route sandbox pods through the SUNK Pod Scheduler so they run on nodes managed by SUNK, and can even share nodes with Slurm jobs. The profile sets schedulerName and the SUNK annotations on the pod; sandboxes launched against the profile then flow through Slurm for placement.
See SUNK Pod Scheduler integration for an end-to-end walkthrough including the profile YAML, the required terminationGracePeriodSeconds value, and how to drive placement with annotations such as sunk.coreweave.com/partition. For the underlying scheduler behavior and annotation contract, see the SUNK Pod Scheduler reference.
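A sketch of the shape such a profile takes. The sunk.coreweave.com/partition annotation comes from the SUNK Pod Scheduler contract referenced above, but the scheduler name, partition value, and grace period here are placeholders; the SUNK Pod Scheduler integration guide is the authoritative source for the exact values.

```yaml
spec:
  pod:
    metadata:
      annotations:
        # Partition annotation from the SUNK Pod Scheduler contract;
        # "a100" is a placeholder partition name.
        sunk.coreweave.com/partition: a100
    spec:
      # Scheduler name is an assumption; confirm it in the SUNK
      # Pod Scheduler reference for your cluster.
      schedulerName: sunk
      # The integration guide specifies a required value; this is a placeholder.
      terminationGracePeriodSeconds: 60
```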
Manage profile bindings
A binding attaches a profile to a runner so the runner enforces that profile’s policies and guardrails for every sandbox it places. Bindings live inside the runner object, not as standalone resources. Update bindings with cwic sandbox runner edit:
```shell
cwic sandbox runner edit [RUNNER-ID] -f bindings.yaml
```
The patch supplies the desired profile_bindings list. The control plane replaces the entire list in one transaction. Any binding you omit from the patch is detached from the runner, so the patch must include every binding you want to keep, not just the ones you are adding or changing. Matching is by profile_template_id, the on-wire field name for the profile’s ID.
```yaml
profile_bindings:
  - profile_template_id: "[CI-PROFILE-ID]"
    profile_name: ci
    is_default: true
  - profile_template_id: "[GPU-PROFILE-ID]"
    profile_name: gpu
  - profile_template_id: "[UNTRUSTED-PROFILE-ID]"
    profile_name: untrusted
```
Each binding carries:
- profile_template_id, the ID of the profile being referenced.
- profile_name, an optional alias sandboxes use to pick this profile at launch. Falls back to the profile’s display name when omitted.
- is_default, whether this binding is the fallback when a sandbox does not specify a profile. Exactly one binding per runner must be true.
The runner’s wizard mode (cwic sandbox runner edit [RUNNER-ID] with no -f) prompts for binding changes interactively if you’d rather not author the patch by hand or through an agent.
Update an existing profile
cwic sandbox profile edit opens the current spec in your editor, then submits the diff:
```shell
cwic sandbox profile edit [PROFILE-ID]
```
A few behaviors worth knowing before you change a profile that real traffic depends on:
- Updates land immediately and fleet-wide. Every runner that binds the profile sees the new spec on the next binding resolution. There is no staged rollout.
- Running sandboxes keep their original spec. A sandbox uses the spec it was launched with; new sandboxes pick up the new spec.
- Deletes are blocked while bindings exist. Detach the profile from every runner (with cwic sandbox runner edit) before cwic sandbox profile delete.
For a safer rollout, version the display name (agent-v1, agent-v2) and migrate runners one at a time by editing each runner’s bindings. When agent-v1 has no remaining bindings, delete it.
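For example, migrating one runner from agent-v1 to agent-v2 might look like the patch below (the profile IDs are placeholders). Because the control plane replaces the whole bindings list, the patch must carry every binding the runner should keep, not just the one being swapped.

```yaml
profile_bindings:
  - profile_template_id: "[AGENT-V2-PROFILE-ID]"  # replaces agent-v1 on this runner
    profile_name: agent
    is_default: true
  - profile_template_id: "[GPU-PROFILE-ID]"       # unrelated binding, kept verbatim
    profile_name: gpu
```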
Delete a profile
Delete a profile with cwic sandbox profile delete. The argument can be a profile ID or its short display name:
```shell
cwic sandbox profile delete [PROFILE-ID]
```
The CLI prompts for confirmation. Pass --yes to skip the prompt for scripted use, and pass multiple IDs to delete several profiles in one command:
```shell
cwic sandbox profile delete team-default beta-template --yes
```
Profiles that are still bound to a runner are reported and skipped. Detach the profile from every runner with cwic sandbox runner edit before retrying the delete.
Deletion is permanent. Running sandboxes that were launched against the profile continue to run with their original spec, but no new sandboxes can be launched against a deleted profile.
Next steps
- The Profile reference documents every field with types, defaults, and validation rules.
- Get started walks through enabling a runner end-to-end.