Configure Secure Access to Google Cloud Storage from CKS Pods Using OIDC Workload Identity Federation

This tutorial demonstrates how to configure a CoreWeave Kubernetes Service (CKS) cluster to authenticate to Google Cloud Storage (GCS) using OIDC Workload Identity Federation. A Kubernetes ServiceAccount in CKS will be able to access GCS directly, without any stored credentials.

Overview

CKS issues OIDC-compliant ServiceAccount tokens to pods. These tokens can be used to authenticate to external services like Google Cloud Platform (GCP) by establishing OIDC trust. This eliminates the need for long-lived credentials and allows per-ServiceAccount scoping of access to cloud resources.

Benefits of this approach:

  • No credentials are stored in secrets or container images.
  • Tokens are short-lived and rotated automatically by Kubernetes.
  • IAM permissions can be tightly scoped to individual ServiceAccounts.
  • Unlike AWS IAM OIDC providers, no TLS certificate thumbprint management is required.

After completing this tutorial, you'll have the following setup:

  1. Pod requests access: The gcs-client pod uses its mounted OIDC token.
  2. GCP validates identity: Google Cloud verifies the token against your CKS cluster's OIDC endpoint.
  3. Impersonation granted: GCP allows the pod to impersonate the gcs-reader service account.
  4. Resource access: The pod reads from your GCS bucket using temporary credentials.

Prerequisites

Before you begin, ensure you have the following:

CoreWeave requirements

  • CKS cluster: A CoreWeave Kubernetes Service cluster with OIDC Workload Identity enabled
  • Cluster access: kubectl configured to access your CKS cluster
  • Cluster details: Your cluster's OIDC issuer URL (we'll show you how to find this)

Google Cloud Platform requirements

  • GCP project: A Google Cloud Platform project where you'll configure Workload Identity
  • GCP permissions: Your GCP account must have the following IAM roles:
    • Workload Identity Pool Admin (to create pools and providers)
    • Service Account Admin (to create and manage service accounts)
    • Project IAM Admin (to bind service accounts to workload identities)
  • GCS bucket: A Google Cloud Storage bucket for testing (or permission to create one)

Command line tools

  • gcloud CLI: Google Cloud SDK installed and authenticated
  • kubectl: Kubernetes command line tool configured for your CKS cluster

GCP project information

You'll need these values during the tutorial. Gather them beforehand:

  • Project ID: Your GCP project ID (for example, my-project-123)
  • Project Number: Your GCP project number (numeric, for example, 123456789012)

To find your project details:

Example
# Get both project ID and number
gcloud projects describe $(gcloud config get-value project)
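If you want these values in shell variables, you can parse the describe output (or pass `--format="value(projectNumber)"` to gcloud directly). A minimal sketch that parses a captured sample of the YAML output; the project values shown are hypothetical placeholders:

```shell
# Sample of what `gcloud projects describe` prints (hypothetical values).
DESCRIBE_OUT='lifecycleState: ACTIVE
name: my-project-123
projectId: my-project-123
projectNumber: "123456789012"'

# Extract the two fields this tutorial needs.
PROJECT_ID=$(printf '%s\n' "$DESCRIBE_OUT" | awk '/^projectId:/ {print $2}')
PROJECT_NUMBER=$(printf '%s\n' "$DESCRIBE_OUT" | awk '/^projectNumber:/ {gsub(/"/, ""); print $2}')
echo "PROJECT_ID=$PROJECT_ID PROJECT_NUMBER=$PROJECT_NUMBER"
# Prints: PROJECT_ID=my-project-123 PROJECT_NUMBER=123456789012
```

Exporting both values up front lets you paste the later gcloud commands without editing placeholders each time.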

Verify your setup

Test that everything is configured correctly:

Example
# Verify gcloud authentication and project ID
gcloud auth list
gcloud config get-value project
# Verify kubectl access to your CKS cluster
kubectl get nodes
# Verify you have the necessary GCP permissions
gcloud iam workload-identity-pools list --location=global

If any of these commands fail, resolve the authentication or permission issues before proceeding.

Set up Kubernetes resources

Before configuring GCP, you need to create the Kubernetes namespace and ServiceAccount that will be granted access to GCS.

Create the namespace

Create a namespace called foo where your workloads will run:

Example
kubectl create namespace foo

Create the ServiceAccount

Create a ServiceAccount called bar that your pods will use:

Example
kubectl create serviceaccount bar --namespace foo

Verify the resources

Confirm both resources were created successfully:

Example
# Verify the namespace exists
kubectl get namespace foo
# Verify the ServiceAccount exists
kubectl get serviceaccount bar --namespace foo
# View the ServiceAccount details (including any tokens)
kubectl describe serviceaccount bar --namespace foo

Expected output should show:

  • Namespace foo in Active status
  • ServiceAccount bar exists in the foo namespace
  • ServiceAccount bar has no long-lived token Secrets; pods will instead mount short-lived projected OIDC tokens

Understanding the mapping

These Kubernetes resources will map to GCP identities as follows:

  • Namespace: foo → GCP attribute attribute.k8s_ns=foo
  • ServiceAccount: bar → GCP attribute attribute.k8s_sa=bar

When you configure the GCP Workload Identity binding, you'll reference this specific combination (foo/bar) to ensure only pods running with this ServiceAccount in this namespace can access your GCS resources.
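The IAM member string used in that binding is assembled from these attributes. A sketch of its shape; the project number here is a hypothetical placeholder:

```shell
# Build the principalSet member string that identifies pods running as foo/bar.
PROJECT_NUMBER=123456789012   # hypothetical; substitute your own project number
K8S_NS=foo
K8S_SA=bar
MEMBER="principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/k8s-pool/attribute.k8s_ns/${K8S_NS}/attribute.k8s_sa/${K8S_SA}"
echo "$MEMBER"
```

Any pod whose token maps to a different namespace or ServiceAccount produces a different member string and is therefore denied impersonation.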

Tip

You can use different namespace and ServiceAccount names, but make sure to update all the GCP commands accordingly. The tutorial uses foo/bar as an example, but in production you'd typically use more descriptive names like data-pipeline/gcs-reader.

Get your OIDC issuer URL from CKS

To use identity federation, you must know the OIDC Issuer URL of your CKS cluster. This is the base URL from which token metadata and keys are served. It's formatted as a valid HTTPS URL, such as: https://oidc.<region>.coreweave.cloud/<cluster-id>.

To find it in the CKS Console:

  1. In the CKS Console, navigate to the cluster details page.
  2. Click the name of the cluster to expand the cluster details panel.
  3. Under the Workload Identity section, the OIDC Issuer URL will be displayed.
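You can also confirm the issuer from a token itself, since it appears in the `iss` claim of every ServiceAccount JWT. The sketch below decodes a hand-built sample token so the logic is visible offline; on a real cluster you could instead fetch one with `kubectl create token bar --namespace foo`:

```shell
# Read the "iss" claim out of a ServiceAccount JWT (sample token is hypothetical).
PAYLOAD='{"iss":"https://oidc.<REGION>.coreweave.cloud/<CLUSTER_ID>","sub":"system:serviceaccount:foo:bar"}'
TOKEN="eyJhbGciOiJSUzI1NiJ9.$(printf '%s' "$PAYLOAD" | base64 | tr -d '=\n' | tr '+/' '-_').sig"

# JWTs are three dot-separated base64url segments; the middle one is the payload.
SEG=$(printf '%s' "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# Restore the base64 padding stripped by base64url encoding.
case $(( ${#SEG} % 4 )) in
  2) SEG="${SEG}==" ;;
  3) SEG="${SEG}=" ;;
esac
ISSUER=$(printf '%s' "$SEG" | base64 -d | sed -n 's/.*"iss":"\([^"]*\)".*/\1/p')
echo "$ISSUER"
```

The printed issuer should match the OIDC Issuer URL shown in the Console; if they differ, the provider you configure in GCP will reject the tokens.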

Create a workload identity pool in GCP

  1. Create a pool to represent trusted external identities (your CKS pods):

    Example
    gcloud iam workload-identity-pools create k8s-pool \
      --location="global" \
      --display-name="CKS Pool"
  2. Confirm the pool was created successfully:

    Example
    gcloud iam workload-identity-pools describe k8s-pool --location=global

    Expected output should show:

    • state: ACTIVE
    • name: projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/k8s-pool

    If this fails, check that you have the Workload Identity Pool Admin role and are authenticated to the correct GCP project.

Create an OIDC provider in the pool

This setup allows GCP to map Kubernetes tokens to identities based on their namespace and ServiceAccount name:

  1. Configure the provider to trust your CKS OIDC issuer and extract identity information from tokens. Replace <REGION> and <CLUSTER_ID> with your actual values.

    Example
    gcloud iam workload-identity-pools providers create-oidc k8s-provider \
      --location="global" \
      --workload-identity-pool="k8s-pool" \
      --display-name="CKS OIDC Provider" \
      --issuer-uri="https://oidc.<REGION>.coreweave.cloud/<CLUSTER_ID>" \
      --attribute-mapping="google.subject=assertion.sub,attribute.k8s_ns=assertion['kubernetes.io']['namespace'],attribute.k8s_sa=assertion['kubernetes.io']['serviceaccount']['name']"
  2. Check that the provider was configured correctly:

    Example
    gcloud iam workload-identity-pools providers describe k8s-provider \
      --location=global \
      --workload-identity-pool=k8s-pool

    Expected output should include:

    • state: ACTIVE
    • Your CKS OIDC issuer URL in the issuerUri field
    • The attribute mapping you configured

Create a Google Cloud Service Account (GSA) and grant access

  1. Create the service account that will be impersonated by CKS workloads:

    Example
    gcloud iam service-accounts create gcs-reader \
      --display-name="CKS GCS Reader"
  2. Grant it permission to read from GCS. Replace <PROJECT_ID> with your actual project ID.

    Example
    gcloud projects add-iam-policy-binding <PROJECT_ID> \
      --member="serviceAccount:gcs-reader@<PROJECT_ID>.iam.gserviceaccount.com" \
      --role="roles/storage.objectViewer"

Bind the CKS service account to the GSA

  1. Authorize the Kubernetes ServiceAccount bar in namespace foo to impersonate the GSA via the identity pool. Replace <PROJECT_NUMBER> and <PROJECT_ID> with your actual values.

    Example
    gcloud iam service-accounts add-iam-policy-binding gcs-reader@<PROJECT_ID>.iam.gserviceaccount.com \
      --role="roles/iam.workloadIdentityUser" \
      --member="principalSet://iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/k8s-pool/attribute.k8s_ns/foo/attribute.k8s_sa/bar"
  2. Check that the binding was created correctly. Replace <PROJECT_ID> with your actual project ID.

    Example
    gcloud iam service-accounts get-iam-policy gcs-reader@<PROJECT_ID>.iam.gserviceaccount.com

    Expected output should include a binding with:

    • role: roles/iam.workloadIdentityUser
    • members containing your principalSet://iam.googleapis.com/projects/... entry

Only tokens issued to the foo/bar ServiceAccount will be permitted to impersonate the gcs-reader account.

Prepare a test GCS bucket

Before testing the authentication, create a GCS bucket or use an existing one:

Example
# Create a test bucket (bucket names must be globally unique)
BUCKET=my-cks-test-bucket-$(date +%s)
gsutil mb gs://${BUCKET}
# Add a test file to the same bucket
echo "Hello from CKS!" | gsutil cp - gs://${BUCKET}/test.txt

If bucket creation fails, check the following:

  • Bucket names must be globally unique; try adding a timestamp or random suffix.
  • Ensure you have Storage Admin permissions in your GCP project.

Use projected OIDC token in pod and access GCS

In your workload, configure a projected service account token with the proper audience.

Create a pod YAML file called gcs-client-pod.yaml with the following content, filling in the placeholder values for <PROJECT_NUMBER>, <PROJECT_ID>, and <YOUR_BUCKET_NAME>.

gcs-client-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gcs-client
spec:
  serviceAccountName: bar
  containers:
    - name: gcs
      image: google/cloud-sdk:slim
      command:
        - bash
        - -c
        - |
          gcloud iam workload-identity-pools create-cred-config \
            projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-provider \
            --service-account="gcs-reader@<PROJECT_ID>.iam.gserviceaccount.com" \
            --credential-source-file=/var/run/secrets/tokens/oidc-token \
            --output-file=/tmp/creds.json && \
          gcloud auth login --cred-file=/tmp/creds.json && \
          gsutil ls gs://<YOUR_BUCKET_NAME>
      volumeMounts:
        - name: oidc-token
          mountPath: /var/run/secrets/tokens
        - name: creds
          mountPath: /tmp
  volumes:
    - name: oidc-token
      projected:
        sources:
          - serviceAccountToken:
              path: oidc-token
              audience: //iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-provider
              expirationSeconds: 3600
    - name: creds
      emptyDir: {}

This pod uses the projected token to generate GCP-compatible credentials dynamically at runtime, then uses them to read from GCS.
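The file written to /tmp/creds.json is not a key: it is an external-account credential configuration that simply tells gcloud where the mounted token lives and which service account to impersonate. A representative sketch of its shape, with all values hypothetical placeholders:

```json
{
  "type": "external_account",
  "audience": "//iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/k8s-pool/providers/k8s-provider",
  "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
  "token_url": "https://sts.googleapis.com/v1/token",
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/gcs-reader@<PROJECT_ID>.iam.gserviceaccount.com:generateAccessToken",
  "credential_source": {
    "file": "/var/run/secrets/tokens/oidc-token"
  }
}
```

Because the file contains only pointers and URLs, leaking it discloses no secret material; the short-lived projected token remains the only credential.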

Deploy and test the pod

  1. Apply the pod configuration:

    Example
    kubectl apply -f gcs-client-pod.yaml -n foo
  2. Wait for the pod to start and check its status:

    Example
    kubectl get pod gcs-client -n foo
    kubectl logs gcs-client -n foo

If the pod fails, check the following:

  • Error 403: Permission denied: Check that the gcs-reader service account has access to your bucket.
  • Invalid token: Verify your OIDC issuer URL matches your cluster's actual endpoint.
  • Pod won't start: Ensure the bar ServiceAccount exists in the foo namespace.

Debug commands:

Example
# Check that the token is mounted
kubectl exec gcs-client -n foo -- ls -la /var/run/secrets/tokens/
# View detailed pod events
kubectl describe pod gcs-client -n foo

Clean up resources

If you're done testing, remove the resources to avoid charges and maintain security:

Remove Kubernetes resources

Example
# Delete the test pod
kubectl delete pod gcs-client -n foo
# Delete the ServiceAccount (optional, if you're not using it elsewhere)
kubectl delete serviceaccount bar -n foo
# Delete the namespace (optional, this removes everything in it)
kubectl delete namespace foo

Remove GCP resources

Example
# Remove the IAM binding
gcloud iam service-accounts remove-iam-policy-binding gcs-reader@<PROJECT_ID>.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/k8s-pool/attribute.k8s_ns/foo/attribute.k8s_sa/bar"
# Delete the Google Cloud Service Account
gcloud iam service-accounts delete gcs-reader@<PROJECT_ID>.iam.gserviceaccount.com
# Delete the OIDC provider
gcloud iam workload-identity-pools providers delete k8s-provider \
  --location=global \
  --workload-identity-pool=k8s-pool
# Delete the Workload Identity Pool
gcloud iam workload-identity-pools delete k8s-pool \
  --location=global

Remove test bucket (if created)

Example
# Delete the test bucket and its contents
gsutil rm -r gs://<YOUR_BUCKET_NAME>

Deleting shared resources

Deleting the Workload Identity Pool will break authentication for any other applications using it. Only delete shared resources if you're sure they're not needed elsewhere.