

On CKS, Storage Volumes are created as Persistent Volumes (PVs) by deploying Persistent Volume Claims (PVCs). You specify the type, size, and placement of a Storage Volume by configuring the manifest for the Persistent Volume Claim.

Persistent Volume Claims

Persistent Volume Claims are requests for Persistent Volume resources, which provide storage volumes for workloads that require persistent storage.

Volume attachments

When a Persistent Volume is created, it is bound to a StorageClass defined in the PV’s manifest. The StorageClass tells Kubernetes how to set up the storage volume, such as which CSI plugin to use and any options that plugin needs. After the PV has been created for the given StorageClass, Kubernetes schedules a Pod on a Node, then hashes the Node name, PV name, and CSI plugin name, and uses that hash as the name of a VolumeAttachment object.
VolumeAttachment objects don’t belong to a namespace and can be accessed in the Cloud Console.
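You can also inspect VolumeAttachment objects with kubectl. This is a sketch that assumes cluster access and permission to read cluster-scoped resources; the attachment name is a placeholder:

```
# List all VolumeAttachment objects (cluster-scoped, so no namespace flag)
kubectl get volumeattachments

# Inspect one attachment to see its Node, PV, and CSI driver
kubectl describe volumeattachment [ATTACHMENT-NAME]
```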

Create PVCs

You can create PVCs using the Cloud Console or by deploying a PVC manifest using kubectl.

Create a PVC with kubectl

Create a Persistent Volume Claim by adapting the following YAML manifest to your specifications:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: new-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: shared-vast
Update these fields in the manifest:
  • name (string): The name of the PVC.
  • namespace (string): The namespace for the PVC.
  • spec.accessModes (list): Sets the access mode for the volume. Distributed File Storage volumes use ReadWriteMany.
  • resources.requests.storage (string): The requested size of the volume, for example 1Gi.
  • storageClassName (string): The storage class name; use shared-vast for CKS.
Configure the manifest as desired, save it to a .yaml file, and apply it with kubectl. Replace [PVC-FILE] with the path to your manifest file.
kubectl apply -f [PVC-FILE]
To verify that the PVC is bound to a PV, replace [PVC-NAME] with the name you set in the manifest:
kubectl get pvc [PVC-NAME]
Example output
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
new-pvc   Bound    pvc-b657b567-6b78-4983-9834-767894349850   1Gi        RWX            shared-vast    1d
To get more detailed information about a PVC:
kubectl describe pvc [PVC-NAME]

Create a PVC with the Cloud Console

You need access to an active CKS cluster to create a PVC using the Cloud Console. If you do not have any clusters, or do not have access to any clusters, a message indicates that clusters cannot be loaded. If an active cluster is available, choose Create PVC to begin. Enter details for the PVC, then click Create. After the PVC is created, basic information about it, including its name, age, size, and status, is shown.

Managing PVCs

You can create PVCs with either kubectl or the Cloud Console. For ongoing management tasks, the Console provides easier visualization and built-in safeguards, so the sections below focus on Console workflows.

Working with PVCs

The groups you belong to determine how you can interact with PVC resources. As a non-admin, you can add PVCs through the UI or the API, but you can access and edit only the PVCs your cluster permissions allow. Administrators have broader control over PVC resources: users with admin, write, and read Cluster RoleBindings can add and remove PVCs from the groups they belong to, and the default admin privileges allow them to manage those PVCs collectively. They can also use the following Console actions inside their cluster:
  • Edit RoleBindings.
  • Create and modify RBAC schemes which allow a chosen group or groups access to resources. These can be as general as a resource type, or as specific as a particular PVC.
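As a sketch, an RBAC scheme scoped to a single PVC might look like the following manifest. The group name dev-team and PVC name new-pvc are hypothetical; note that resourceNames restricts verbs like get, update, and delete, but cannot restrict list or watch:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-editor
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    # Omit resourceNames to cover all PVCs in the namespace
    resourceNames: ["new-pvc"]
    verbs: ["get", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pvc-editor-binding
  namespace: default
subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pvc-editor
  apiGroup: rbac.authorization.k8s.io
```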

Editing volumes

To edit a PVC, click the three-dot menu on the far right of the PVC listing on the Overview page, then click Edit. Enter the desired changes in the pop-up modal.
Shrinking volumes: PVCs can’t be reduced in size. To use a smaller volume, you’ll need to create a new, smaller PVC and migrate your data manually.
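One way to migrate the data manually is a temporary Pod that mounts both PVCs and copies the contents across. This is a sketch; the PVC names old-pvc and small-pvc and the image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-migrate
  namespace: default
spec:
  restartPolicy: Never
  containers:
    - name: copy
      image: busybox:1.36
      # Copy everything from the old volume to the new one, preserving attributes
      command: ["sh", "-c", "cp -a /old/. /new/ && echo done"]
      volumeMounts:
        - name: old
          mountPath: /old
        - name: new
          mountPath: /new
  volumes:
    - name: old
      persistentVolumeClaim:
        claimName: old-pvc
    - name: new
      persistentVolumeClaim:
        claimName: small-pvc
```

After the Pod completes, verify the copied data, point your workloads at the new PVC, and delete the old one.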

Cloning PVCs

You can clone PVCs if you have write permissions. Click the Clone button from the PVC’s three-dot menu, then enter a new name for the cloned PVC. Click Confirm to provision an exact copy of the PVC in your default namespace.

Binding a PVC to a new namespace

Rebinding a PVC makes an existing persistent volume’s storage available in a different namespace. This is useful when you need to migrate workloads or share storage across namespaces while preserving your data. CoreWeave provides the rebind-pvc.sh utility script to automate this process. The script generates the necessary Kubernetes manifests to bind an existing PVC’s underlying storage volume to a new namespace. For complete instructions, including installation, usage examples, and important teardown procedures, see the rebind-pvc.sh documentation on GitHub.
Important: proper deletion order. When deleting rebound PVCs, always delete the child PVCs (in target namespaces) before deleting the original base PVC. CoreWeave’s Persistent Volume Management Operator (PVMO) deletes the underlying storage when the base PVC is deleted, which can leave child PVCs with stale volume handles and prevent Pods from shutting down gracefully. See the proper teardown procedure for details.
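With hypothetical PVC and namespace names, the safe order looks like this:

```
# Delete child PVCs in their target namespaces first
kubectl delete pvc child-pvc -n target-namespace

# Only then delete the original base PVC; PVMO removes the underlying storage
kubectl delete pvc base-pvc -n original-namespace
```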

Delete a storage volume via the Cloud Console

Only users with admin permissions can delete Storage Volumes.
From the three-dot menu, select Delete. To confirm, type the name of the desired volume in the confirmation modal, then click Delete. When a PVC is deleted, any PV linked to it is automatically soft-deleted. Soft-deleted PVs are deleted permanently after the soft-delete window (24 hours after the original PVC deletion) expires. The volume is removed from the PVC overview when deletion is complete.

Performance optimization

Avoid subPath mounts when possible

We recommend mounting Distributed File Storage volumes onto your Pods without using a Kubernetes subPath. Mounting with subPath exposes only a subdirectory of the volume to the Pod, making certain Distributed File Storage features unavailable from inside the mount. The most notable example is the .vast_trash directory, which enables asynchronous, non-conflicting bulk deletes. For more information about how subPath affects deletion, see Delete Files with VAST trash. If you have a use case that requires subPath mounts, contact CoreWeave Support for help.
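For example, a Pod can mount the whole volume at one path and let the application address subdirectories itself. This is a sketch; the Pod, volume, and PVC names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        # Mount the full volume without subPath so features like .vast_trash remain available
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: new-pvc
```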

Git performance

Git performance with Distributed File Storage can sometimes be slower than desired. The following best practices help improve it.

Enable parallel checkout

Enable parallel checkout by setting a desired number of parallel workers in your Git config (checkout.workers). The recommended number for working with Distributed File Storage is 16.
git config --global checkout.workers 16

Enable untracked cache

Enable Git’s untracked cache.
git config --global core.untrackedCache true

Define record sizes for TAR performance

When using tar for archiving, it may help performance to set the record size to 1M:
tar --record-size=1M -cf archive.tar foo
Setting the record size can also make rsync more efficient. See the man page for tar for more information.
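As a small runnable sketch of the command above (the foo directory and file name are placeholders), GNU tar accepts a size suffix for --record-size as long as the result is a multiple of 512 bytes:

```shell
# Create sample data to archive
mkdir -p foo && echo "sample" > foo/file.txt

# Archive with a 1M record size
tar --record-size=1M -cf archive.tar foo

# List the archive contents to verify
tar -tf archive.tar
```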
Last modified on April 29, 2026