
Object Storage

Use CoreWeave's S3-compatible Object Storage for flexible and efficient data storage

CoreWeave Object Storage is an S3-compatible storage system that allows data to be stored and used flexibly and efficiently.

Get started

Object Storage is easily deployed and configured with the Cloud UI. Advanced users who need more control, encryption, or fine-grained bucket policies should deploy with Kubernetes.

Using the Cloud UI

Use the Object Storage section of the Cloud UI to generate tokens and s3cmd configuration files. A token is a key pair used by utilities such as s3cmd, s5cmd, rclone, Boto3, and more.

To create one, click Create a New Token. This brings up the Create Token modal, which prompts you to assign a name, default region (which can be changed later), and an access level.

If using s3cmd, select Automatically download s3cmd config.

Once created, the Access and Secret keys are available on the Object Storage page. Click the icon to the right of each key to copy it to the clipboard.

The download icon on the far right generates a new s3cmd configuration, with an opportunity to set the default region.

Please see our s3cmd documentation to learn how to use the config file.
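For reference, the downloaded configuration file typically sets the credentials and the regional host. The following is a hedged sketch of an `~/.s3cfg`: the keys are placeholders, and the LGA1 region is assumed (adjust `host_base` and `host_bucket` for your region):

```ini
# Sketch of ~/.s3cfg -- placeholder keys, LGA1 region assumed
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = object.lga1.coreweave.com
host_bucket = %(bucket)s.object.lga1.coreweave.com
use_https = True
```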

About access levels

When creating the token, choose the desired access level: Read, Write, Read/Write, or Full. To learn more about the capabilities of each level, please see Identity and Access Management (IAM) and access levels below.

Any access level may be chosen from this drop-down

From the Object Storage page, the Access Level field displays the key's current access level.

Access levels are displayed on the Object Storage page

Using Custom Resource Definitions (CRDs)

CoreWeave provides Kubernetes Custom Resource Definitions (CRDs) for programmatic and automated methods of generating access to Object Storage clusters.

In most cases, a single user is sufficient. If separate credentials are needed for each person or application, generate them by deploying a user CRD per user. More granular permissions can be created with bucket policies.

The user CRD

Deploying a user CRD creates a user with access to the Object Storage clusters. Each user has an access key and a secret key, which are stored in a Kubernetes Secret in the namespace.

The Secret's name can be controlled with spec.secretName as shown below. If not specified, the secret is named <namespace>-<metadata.name>-obj-store-creds. The Secret is associated with the user, and deleted when the user is deleted.
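For illustration, the default Secret naming convention described above can be sketched in Python (the helper name is hypothetical, not part of any CoreWeave SDK):

```python
# Hypothetical helper mirroring the default Secret name convention:
# <namespace>-<metadata.name>-obj-store-creds
def default_secret_name(namespace: str, user_name: str) -> str:
    """Build the default credentials Secret name for a user CRD."""
    return f"{namespace}-{user_name}-obj-store-creds"

print(default_secret_name("my-namespace", "my-example"))
# prints: my-namespace-my-example-obj-store-creds
```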

Tip

As a best practice, do not share a secretName between multiple users. Because Secrets are associated with users, they are automatically deleted when the user is deleted, which disrupts other users sharing the same Secret name. Unless setting the secretName is important for your use case, consider leaving it blank to generate the default name.

After creating the user CRD, it can be viewed and deleted with kubectl. It can also be viewed and deleted in the Cloud UI as described above because these methods are compatible and work on the same resource.

Create the user CRD

This CRD creates the user my-example with full permissions, storing its credentials in the Secret my-example-secret.

# object-storage-crd.yaml
apiVersion: objectstorage.coreweave.com/v1alpha1
kind: User
metadata:
  name: my-example
  namespace: my-namespace
spec:
  owner: my-namespace
  access: full # Possible options are: full, readwrite, read, or write
  secretName: my-example-secret

To create the user with kubectl:

Example
$
kubectl apply -f object-storage-crd.yaml
user.objectstorage.coreweave.com/my-example created

View the user CRD and Secret

To view the user:

Example
$
kubectl get users.objectstorage.coreweave.com
NAME         OWNER          ACCESS   OBJECT STORAGE ACCESS   OBJECT STORAGE ID
my-example   my-namespace   full     ACTIVE                  my-namespace:my-example

To view my-example-secret:

Example
$
kubectl get secrets
NAME                TYPE     DATA   AGE
my-example-secret   Opaque   2      14s

To view the access key and secret key stored in my-example-secret, extract and base64-decode them. These are the credentials used by tools like rclone and s3cmd:

Example
$
kubectl get secrets my-example-secret \
-o jsonpath='{.data.accessKey}' | base64 --decode
EXAMPLEKEYDALCHZ
$
kubectl get secrets my-example-secret \
-o jsonpath='{.data.secretKey}' | base64 --decode
EXAMPLEKEYueqR0hf5nbxJJRSD9gqVYqAou1Qgp8J2Z

Delete the user

To delete the user with kubectl:

Example
$
kubectl delete users.objectstorage.coreweave.com my-example
user.objectstorage.coreweave.com "my-example" deleted

The corresponding Secret is deleted automatically.

Object Storage endpoints

When retrieving files with HTTPS, use the endpoint for the region where the bucket is located.

| Region | Endpoint |
| --- | --- |
| New York - LGA1 | https://object.lga1.coreweave.com/ |
| Chicago - ORD1 | https://object.ord1.coreweave.com/ |
| Las Vegas - LAS1 | https://object.las1.coreweave.com/ |

Each endpoint represents an independent, exclusive object store. This means that objects stored in ORD1 buckets are not accessible from the LAS1 endpoint, and so on.

Bucket names must be unique per region. It is recommended practice to include the region name (such as ord1) in the bucket name.

Users may use any regional Object Storage endpoint and create and use buckets as they wish, but each region has its own quota limit. The default quota is 30 TB of data per region.
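Because each endpoint is an independent object store, application code should always target the region that owns the bucket. As a sketch, the per-region endpoints can be kept in a small lookup table (the dictionary and helper names are illustrative, not part of any CoreWeave SDK):

```python
# Regional endpoints from the table above; names here are illustrative.
ENDPOINTS = {
    "lga1": "https://object.lga1.coreweave.com/",
    "ord1": "https://object.ord1.coreweave.com/",
    "las1": "https://object.las1.coreweave.com/",
}

def endpoint_for(region: str) -> str:
    """Return the Object Storage endpoint that owns buckets in `region`."""
    try:
        return ENDPOINTS[region.lower()]
    except KeyError:
        raise ValueError(f"unknown region: {region}") from None
```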

Note

Should you require an increase in your quota limit, please contact support.

Accelerated Object Storage

CoreWeave also offers Accelerated Object Storage, a series of Anycasted NVMe-backed storage caches that provide blazing fast download speeds. Accelerated Object Storage is best suited for frequently accessed data that doesn't change often, such as model weights and training data.

One of the biggest advantages of Anycasted Object Storage Caches is that data can be pulled from across data center regions, then cached in the data center where your workloads are located.

For example, if your models are hosted in ORD1 (Chicago) but your deployment scales to all regions (ORD1, LAS1, LGA1), a call to https://accel-object.ord1.coreweave.com is routed to the cache closest to the workload: a call from LGA1 hits the cache in LGA1, and a call from LAS1 hits the cache in LAS1. This drastically reduces spin-up times for workloads where scaling is a concern.

Note

When using Accelerated Object Storage, there is no need to change the endpoint for each region your application is deployed in; Anycast routing handles this automatically.

Caution

Accelerated endpoints should only be used to get objects.

Do not use accelerated endpoints for any operation that lists, puts, manipulates, updates, or otherwise changes objects.

Learn more

CoreWeave strongly recommends using the standard S3 endpoints for third-party interfaces and utilities that use S3 protocols. Interfaces and utilities pointed at the accelerated endpoints may not work as expected or may break entirely.

Use of CoreWeave's Accelerated Object Storage is available at no additional cost. To use Accelerated Object Storage, simply modify your Object Storage endpoint to one of the addresses that corresponds to your Data Center region.

| Region | Endpoint |
| --- | --- |
| Las Vegas - LAS1 | accel-object.las1.coreweave.com |
| New York - LGA1 | accel-object.lga1.coreweave.com |
| Chicago - ORD1 | accel-object.ord1.coreweave.com |
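Since switching to Accelerated Object Storage only means swapping the hostname, a small helper can build the accelerated URL for a region. This is an illustrative sketch, not an official API; remember that accelerated endpoints are for GET operations only:

```python
# Illustrative helper: the accelerated endpoint differs from the standard
# endpoint only by hostname, and should be used for GET requests only.
def accelerated_endpoint(region: str) -> str:
    """Build the Accelerated Object Storage base URL for a region."""
    return f"https://accel-object.{region.lower()}.coreweave.com"
```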

Server Side Encryption

Note

Server Side Encryption is implemented according to AWS SSE-C standards.

CoreWeave supports Server Side Encryption via customer-provided encryption keys. The client passes an encryption key along with each request to read or write encrypted data. No modifications to your bucket need to be made to enable Server Side Encryption (SSE-C); simply specify the required encryption headers in your requests.

Important

It is the client's responsibility to manage all keys, and to remember which key is used to encrypt each object.

SSE with customer-provided keys (SSE-C)

The following headers are used to specify SSE-C parameters.

| Name | Description |
| --- | --- |
| x-amz-server-side-encryption-customer-algorithm | Specifies the encryption algorithm. The header value must be AES256. |
| x-amz-server-side-encryption-customer-key | Provides the 256-bit, base64-encoded encryption key used to encrypt or decrypt the data. |
| x-amz-server-side-encryption-customer-key-MD5 | Provides the base64-encoded, 128-bit MD5 digest of the encryption key according to RFC 1321. Used as a message integrity check to ensure the encryption key was transmitted without error or interference. |
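As a hedged sketch, a client can construct these three headers from a raw 256-bit key using only the Python standard library (the helper name is an assumption, not part of any SDK):

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes) -> dict:
    """Build the three SSE-C request headers from a raw 256-bit customer key."""
    if len(key) != 32:
        raise ValueError("SSE-C key must be exactly 32 bytes (256 bits)")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        # 128-bit MD5 digest of the key, base64-encoded, per RFC 1321
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

# Send these headers with each request that reads or writes the encrypted object.
headers = sse_c_headers(os.urandom(32))
```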

Server Side Encryption example

The following example demonstrates using an S3 tool to configure Server Side Encryption for Object Storage.

Note

Because SSE with static keys is not supported by s3cmd at this time, the AWS CLI tool is used for this example. For a full explanation of the parameters used with the aws s3 tool in this example, review the AWS CLI s3 documentation.

First, run aws configure to set up access and to configure your Secret Keys.

Example
$
aws configure

Separately, generate a key using your preferred method. In this case, we use OpenSSL to print a new key to the file sse.key.

Example
$
openssl rand 32 > sse.key
Important

The generated key must be 32 bytes in length.

Once aws configure is complete and your new key has been generated, run the following s3 commands to upload a file with Server Side Encryption.

Example
$
aws s3 --endpoint-url=https://object.las1.coreweave.com \
cp your-file.txt s3://your-bucket/your-file.txt \
--sse-c-key=fileb://sse.key \
--sse-c AES256

Finally, to retrieve the file, pass the same encryption key with --sse-c-key so aws s3 can decrypt the file:

Example
$
aws s3 --endpoint-url=https://object.las1.coreweave.com \
cp s3://your-bucket/your-file.txt your-file.txt \
--sse-c-key=fileb://sse.key \
--sse-c AES256

Identity and Access Management (IAM) and access levels

When an initial key pair is created for Object Storage access, that key pair is given the permissions specified on creation in order to read, write, and modify policies of the buckets which it owns. Each key pair is considered an individual user for access, and can be used to provide granular access to applications or users.

Permission levels that may be granted are:

| Permission level | CRD key | Description |
| --- | --- | --- |
| Read | read | Read-only access to buckets you own and have created |
| Write | write | Write-only access to buckets you own and have created |
| Read/Write | readwrite | Read and write access to buckets you own and have created |
| Full | full | Read/Write access, plus admin access to create buckets and apply policies to buckets |
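The table above can be modeled as a simple capability lookup. This is an illustrative sketch only; the capability labels ("read", "write", "admin") are informal names, not API values:

```python
# Illustrative model of the access levels; keys match the CRD `access` field.
ACCESS_LEVELS = {
    "read": {"read"},
    "write": {"write"},
    "readwrite": {"read", "write"},
    "full": {"read", "write", "admin"},  # admin: create buckets, set policies
}

def allows(access: str, capability: str) -> bool:
    """Check whether an access level includes a given capability."""
    return capability in ACCESS_LEVELS[access]
```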

IAM actions

Currently, CoreWeave Cloud supports the following IAM bucket policy actions:

  • s3:AbortMultipartUpload
  • s3:CreateBucket
  • s3:DeleteBucketPolicy
  • s3:DeleteBucket
  • s3:DeleteBucketWebsite
  • s3:DeleteObject
  • s3:DeleteObjectVersion
  • s3:DeleteReplicationConfiguration
  • s3:GetAccelerateConfiguration
  • s3:GetBucketACL
  • s3:GetBucketCORS
  • s3:GetBucketLocation
  • s3:GetBucketLogging
  • s3:GetBucketNotification
  • s3:GetBucketPolicy
  • s3:GetBucketRequestPayment
  • s3:GetBucketTagging
  • s3:GetBucketVersioning
  • s3:GetBucketWebsite
  • s3:GetLifecycleConfiguration
  • s3:GetObjectAcl
  • s3:GetObject
  • s3:GetObjectTorrent
  • s3:GetObjectVersionAcl
  • s3:GetObjectVersion
  • s3:GetObjectVersionTorrent
  • s3:GetReplicationConfiguration
  • s3:ListAllMyBuckets
  • s3:ListBucketMultipartUploads
  • s3:ListBucket
  • s3:ListBucketVersions
  • s3:ListMultipartUploadParts
  • s3:PutAccelerateConfiguration
  • s3:PutBucketAcl
  • s3:PutBucketCORS
  • s3:PutBucketLogging
  • s3:PutBucketNotification
  • s3:PutBucketPolicy
  • s3:PutBucketRequestPayment
  • s3:PutBucketTagging
  • s3:PutBucketVersioning
  • s3:PutBucketWebsite
  • s3:PutLifecycleConfiguration
  • s3:PutObjectAcl
  • s3:PutObject
  • s3:PutObjectVersionAcl
  • s3:PutReplicationConfiguration
  • s3:RestoreObject
Important

CoreWeave Cloud does not yet support setting policies on users, groups, or roles. Currently, account owners need to grant access directly to individual users. Granting an account access to a bucket grants access to all users in that account.

For all requests, the condition keys CoreWeave currently supports are:

  • aws:CurrentTime
  • aws:EpochTime
  • aws:PrincipalType
  • aws:Referer
  • aws:SecureTransport
  • aws:SourceIp
  • aws:UserAgent
  • aws:username

Certain S3 condition keys for bucket and object requests are also supported. In the following tables, <perm> may be replaced with read, write/read-acp, or write-acp/full-control for read, write/read, or full-control access, respectively.

Supported S3 Bucket Operations

| Permission | Condition Keys |
| --- | --- |
| s3:createBucket | s3:x-amz-acl, s3:x-amz-grant-<perm> |
| s3:ListBucket | s3:prefix, s3:delimiter, s3:max-keys |
| s3:ListBucketVersions | N/A |
| s3:PutBucketAcl | s3:x-amz-acl, s3:x-amz-grant-<perm> |

Supported S3 Object Operations

| Permission | Condition Keys |
| --- | --- |
| s3:PutObject | s3:x-amz-acl, s3:x-amz-grant-<perm>, s3:x-amz-copy-source, s3:x-amz-server-side-encryption, s3:x-amz-server-side-encryption-aws-kms-key-id, s3:x-amz-metadata-directive (use PUT or COPY to overwrite or preserve metadata in COPY requests, respectively), s3:RequestObjectTag/<tag-key> |
| s3:PutObjectAcl | s3:x-amz-acl, s3:x-amz-grant-<perm>, s3:ExistingObjectTag/<tag-key> |
| s3:PutObjectVersionAcl | s3:x-amz-acl, s3:x-amz-grant-<perm>, s3:ExistingObjectTag/<tag-key> |
| s3:PutObjectTagging | s3:RequestObjectTag/<tag-key>, s3:ExistingObjectTag/<tag-key> |
| s3:PutObjectVersionTagging | s3:RequestObjectTag/<tag-key>, s3:ExistingObjectTag/<tag-key> |
| s3:GetObject | s3:ExistingObjectTag/<tag-key> |
| s3:GetObjectVersion | s3:ExistingObjectTag/<tag-key> |
| s3:GetObjectAcl | s3:ExistingObjectTag/<tag-key> |
| s3:GetObjectVersionAcl | s3:ExistingObjectTag/<tag-key> |
| s3:GetObjectTagging | s3:ExistingObjectTag/<tag-key> |
| s3:GetObjectVersionTagging | s3:ExistingObjectTag/<tag-key> |
| s3:DeleteObjectTagging | s3:ExistingObjectTag/<tag-key> |
| s3:DeleteObjectVersionTagging | s3:ExistingObjectTag/<tag-key> |
Note

When using AWS SDKs, the variable AWS_REGION is defined within the V4 signature headers. The object storage region for CoreWeave is named default.

Bucket policies

Another access control mechanism is bucket policies, which are managed through standard S3 operations. A bucket policy may be set or deleted by using s3cmd, as shown below.

In this example, a bucket policy is created to allow public downloads from the bucket happybucket:

Example
$
cat > examplepol
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::happybucket/*"
      ]
    }
  ]
}

The policy is then applied using s3cmd setpolicy:

Example
$
s3cmd setpolicy examplepol s3://happybucket

Once the policy is applied, the data in your bucket may be accessed without credentials, for example, by using curl:

Example
$
curl -v https://happybucket.object.las1.coreweave.com/my-new-file.txt

Finally, the policy is deleted using s3cmd delpolicy:

Example
$
s3cmd delpolicy s3://happybucket
Note

Bucket policies do not yet support string interpolation.
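For programmatic policy management, the same public-read policy shown earlier can be generated with Python's standard library before being applied with your S3 tool of choice (the helper name is hypothetical):

```python
import json

def public_read_policy(bucket: str) -> str:
    """Serialize a public-read GetObject bucket policy, as in the example above."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

The resulting JSON string can be written to a file and applied with `s3cmd setpolicy`.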

Frequently asked questions

Are buckets accessible between tokens?

Yes. All Object Storage tokens for an organization can access all buckets in the organization. Tokens can have different access levels.

Is data deleted when the tokens are deleted?

No. Even when all tokens in the organization are deleted, the data is untouched. To delete all data, use a tool like s3cmd or rclone to purge the buckets.

Pricing

The current price for Object Storage is $0.03 per GB per month.