Use Objects and Buckets
Interact with your buckets in Object Storage
This guide explains how to manage objects stored in CoreWeave AI Object Storage buckets using S3-compatible tools, including the AWS CLI, s3cmd, and Boto3. Alternatively, you can use Cyberduck to manage your buckets and objects in a graphical interface. Currently, the Cloud Console does not support managing objects.
To manage versioned buckets with rclone, see Versioned Buckets.
Prerequisites
This guide presumes you have the following:
- An active CoreWeave account with SAML/SSO configured.
- A valid API Access Key and Secret Token key pair.
- Adequate permissions to manage objects in CoreWeave AI Object Storage (for example, s3:PutObject and s3:DeleteObject). See Object Storage S3 Permissions for more information.
- An organization access policy set.
- A properly configured SDK of your choosing.
- Note that the primary endpoint, https://cwobject.com, requires TLS v1.3. Ensure your S3-compatible tools and OpenSSL use TLS v1.3.
- Make sure your S3 configuration uses the correct endpoint URL and has virtual-hosted addressing enabled.
Add objects
- AWS CLI
- s3cmd
- Boto3
Ensure you have the AWS CLI installed and configured.
Use the s3 cp command to copy a file into a bucket addressed using the s3:// scheme.
$ aws s3 cp /path/to/my-important-file.txt s3://my-bucket-name
upload: ./my-important-file.txt to s3://my-bucket-name/my-important-file.txt
Ensure you have s3cmd installed, configured, and correctly credentialed.
Use the put command to copy an object into the bucket.
$ s3cmd put /path/to/my-important-file.txt s3://my-bucket-name/
This returns the following output, confirming your object has been stored in your bucket:
upload: 'my-important-file.txt' -> 's3://my-bucket-name/my-important-file.txt'
Ensure you have Boto3 installed, configured, and correctly credentialed before continuing. Open your Python environment and use the following script to upload a file to your S3 bucket:
import boto3

# Create the S3 client
s3 = boto3.client('s3')

# Define your bucket name and the file path
bucket_name = 'my-bucket-name'
file_name = '/path/to/my-file'
object_name = 'my-important-file'  # Optional. If you don't set a value, it uses the same string as file_name.

# Upload the file
s3.upload_file(file_name, bucket_name, object_name)
print(f'File {file_name} uploaded to {bucket_name}/{object_name}')
If this succeeds, a confirmation message is printed.
File /path/to/my-file uploaded to my-bucket-name/my-important-file
Exceeding quota limits
If you try to upload an object to a bucket in an Availability Zone where the capacity quota has been reached, you receive an error message:
<Message>The account is write suspended.</Message>
To resolve this, you can request a quota increase.
List buckets and their contents
You can list buckets and their contents using S3-compatible tools such as the AWS CLI, s3cmd, or Boto3. If you're working with versioned buckets, you can use rclone to list buckets and their contents, including delete markers.
- AWS CLI
- s3cmd
- Boto3
If you want to see all of your available buckets, use the ls command:
$ aws s3 ls
To list all the objects currently in a bucket, use the ls command to target a bucket path.
$ aws s3 ls s3://my-bucket-name
The command returns a plain-text listing of the objects in the selected bucket, showing each object's last-modified date, size, and name.
2024-10-14 15:35:10     123456 my-first-file.txt
2024-10-14 16:45:22     234567 another-file-of-mine.txt
To list all your available buckets, use the ls command:
$ s3cmd ls
To see the contents of a specific bucket, target the bucket's path using the s3:// scheme.
$ s3cmd ls s3://my-bucket-name/
This command will output a list of file paths including the bucket name and file name. For example:
2024-10-14 15:35    123456   s3://my-bucket-name/my-first-file.txt
2024-10-14 16:45    234567   s3://my-bucket-name/another-file-of-mine.txt
List all your buckets programmatically in Boto3 using the following script:
import boto3

# Create the S3 client
s3 = boto3.client('s3')

# List all your buckets
response = s3.list_buckets()

# Output your bucket names
print("Existing buckets:")
for bucket in response['Buckets']:
    print(f' {bucket["Name"]}')
The following script may be used to view the contents of a specific bucket:
import boto3

# Create the S3 client
s3 = boto3.client('s3')

# Define your bucket name
bucket_name = 'my-bucket-name'

# List the objects in the specified bucket
response = s3.list_objects_v2(Bucket=bucket_name)

# Output the object keys
if 'Contents' in response:
    print(f"Objects in {bucket_name}:")
    for obj in response['Contents']:
        print(f' {obj["Key"]} (Size: {obj["Size"]} bytes)')
else:
    print(f"No objects found in {bucket_name}.")
The resulting output looks similar to the following:
Objects in my-bucket-name:
 my-first-file.txt (Size: 4500 bytes)
 another-file-of-mine.txt (Size: 5400 bytes)
Delete an object from a bucket
- AWS CLI
- s3cmd
- Boto3
To delete specific objects from a bucket, use the rm command with the AWS CLI.
$ aws s3 rm s3://my-bucket-name/my-important-file.txt
When this succeeds, a confirmation message like this one is printed:
delete: s3://my-bucket-name/my-important-file.txt
To delete an object from a bucket, use the del command.
$ s3cmd del s3://my-bucket-name/my-important-file.txt
If this succeeds, a confirmation message is printed:
Object s3://my-bucket-name/my-important-file.txt deleted
The following script may be used to delete an object from a bucket.
import boto3

# Create the S3 client
s3 = boto3.client('s3')

# Define your bucket name and the object key you're deleting
bucket_name = 'my-bucket-name'
object_name = 'my-important-file'

# Delete the object
s3.delete_object(Bucket=bucket_name, Key=object_name)
print(f'Object {object_name} deleted from {bucket_name}')
If successful, a success message is printed:
Object my-important-file deleted from my-bucket-name
For more information about accessing and interacting with the contents of your buckets, see the official Amazon documentation for S3 buckets.
Rename an object
Make sure you have the following to rename an object:
- s3:PutObject and s3:DeleteObject permissions.
- A bucket that does not have versioning enabled, either currently or in the past.
- A request scoped to renaming a single object within the same bucket. The source and destination keys must be in the same bucket.
See Rename Objects for more information.
- AWS CLI
- s3cmd
- Boto3
To rename an object, use the aws s3api rename-object command with the AWS CLI.
$ aws s3api rename-object --bucket {bucket-name} --key {source-key} --new-key {destination-key}
When this succeeds, a confirmation message is printed:
rename: s3://my-bucket-name/my-important-file.txt -> s3://my-bucket-name/my-new-file.txt
To rename an object, use the mv command with s3cmd.
$s3cmd mv s3://my-bucket-name/my-important-file.txt s3://my-bucket-name/my-new-file.txt
Note that s3cmd mv performs a copy and delete operation, not an atomic rename. For atomic renames, use the AWS CLI or Boto3 with the RenameObject API.
To rename an object, use the rename_object method with Boto3.
s3.rename_object(Bucket='my-bucket-name', Key='my-important-file.txt', NewKey='my-new-file.txt')
If successful, a success message is printed:
Object my-important-file.txt renamed to my-new-file.txt