Get Started with CoreWeave
What is CoreWeave Cloud?
CoreWeave is a specialized cloud, purpose-built for GPU-accelerated workloads. We run a fully managed, bare metal serverless Kubernetes infrastructure to deliver the best performance in the industry while reducing your DevOps overhead.
What does that mean, and how do we do it?
- Broadest range of NVIDIA GPUs: Optimize your performance and cost with the best GPU selection for your use case. Custom configure your instances for Generative AI, Machine Learning, Large Language Model (LLM) inference, VFX rendering, and Pixel Streaming at scale.
- Fully managed: We remove the Kubernetes management burden. We manage the control plane, Node scheduling, scaling, and cluster administration so you can focus on deploying your jobs with standard Kubernetes tools, workload managers like Slurm and Argo Workflows, or our Cloud UI. Anything you can run in a Docker container, you can run on CoreWeave.
- Bare metal: Your jobs run on bare metal Nodes without a hypervisor. Nothing is virtualized, so there are no oversubscribed shared resources. You choose the resources your workload requires, and they are dedicated to your Pods (see the sketch after this list). Instances are priced by the hour and billed by the minute: you get the resources you ask for and pay only for what you use.
- Serverless Kubernetes: CoreWeave Cloud combines the benefits of serverless architecture with the fast, reliable performance of Kubernetes. Clients can run their own code, manage data, and integrate applications without managing any infrastructure. With Knative, clients can also autoscale to thousands of GPUs and scale back to zero based on user demand.
- Networking: CoreWeave's Cloud Native Networking uses managed Kubernetes design principles to move firewall and load-balancing functions directly onto the network fabric. Our NDR InfiniBand fabric delivers 3.2Tbps of non-blocking network bandwidth per Node for direct GPU-to-GPU communication. Single and multi-region Layer 2 VPCs are also available for specific use cases.
- Storage: Share our high-performance NVMe File System Volumes with multiple Pods to deliver up to 10 million IOPS per volume, powering workloads for distributed ML training and fine-tuning, VFX rendering, batch processing for life sciences, and pixel streaming for the metaverse. Accelerated Object Storage, when combined with CoreWeave's Tensorizer, can load PyTorch inference models in less than five seconds.
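To make the bare metal resource model concrete, here is a minimal sketch that uses the official Kubernetes Python client to request dedicated CPU, memory, and GPU for a Pod. The namespace, image, and command are hypothetical placeholders, and the `nvidia.com/gpu` resource name follows the standard NVIDIA device-plugin convention rather than a CoreWeave-specific API.

```python
# A minimal sketch, assuming a kubeconfig already set up for your CoreWeave
# namespace. The namespace, image, and command below are hypothetical.
from kubernetes import client, config

def create_gpu_pod() -> None:
    config.load_kube_config()  # load credentials from your local kubeconfig

    container = client.V1Container(
        name="trainer",
        image="nvcr.io/nvidia/pytorch:23.10-py3",  # hypothetical image tag
        command=["python", "train.py"],            # hypothetical entrypoint
        resources=client.V1ResourceRequirements(
            # On bare metal Nodes, these requests map to physical cores,
            # RAM, and GPUs reserved for this Pod alone.
            requests={"cpu": "8", "memory": "32Gi", "nvidia.com/gpu": "1"},
            limits={"cpu": "8", "memory": "32Gi", "nvidia.com/gpu": "1"},
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-example", namespace="tenant-example"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="tenant-example", body=pod)

if __name__ == "__main__":
    create_gpu_pod()
```

The same spec could be written as a plain YAML manifest and applied with kubectl; either way, the requested resources are physical hardware, not virtualized slices.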
This is how we run compute-intensive workloads at scale.
Get started
If you don't have a CoreWeave account for your business, reach out to our Sales Team to get started. Our Quickstart guide explains how to add user accounts, manage namespaces, access the Billing Portal, and perform other common administrator tasks.
GPU-powered solutions
Learn how to use Kubernetes, workload managers, and our Cloud UI to deploy solutions for these use cases.
Machine Learning and AI
- Train ML models on our H100 Supercomputer with InfiniBand, which holds the MLPerf record with performance 29x faster than the next-best competitor.
- Fine-tune Large Language Models across multiple GPUs using Argo Workflows.
- Maximize generative AI inference performance with popular frameworks and autoscale from zero to thousands of GPUs.
- Use CoreWeave's Tensorizer to load models in less than five seconds from accelerated Object Storage.
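As a hedged illustration of that workflow, the sketch below streams a serialized model from Object Storage directly into a skeleton module using the open-source tensorizer package. The bucket URI and model reference are hypothetical placeholders, and the exact TensorDeserializer arguments may vary between tensorizer releases.

```python
# A minimal sketch, assuming CoreWeave's open-source `tensorizer` package
# and a model that was already serialized to Object Storage. The bucket URI
# below is a hypothetical placeholder.
from transformers import AutoConfig, AutoModelForCausalLM
from tensorizer import TensorDeserializer, stream_io
from tensorizer.utils import no_init_or_tensor

MODEL_URI = "s3://my-bucket/gpt-j-6b.tensors"  # hypothetical bucket

# Build the module skeleton without allocating or initializing weights.
config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
with no_init_or_tensor():
    model = AutoModelForCausalLM.from_config(config)

# Stream tensors from accelerated Object Storage directly onto the GPU.
stream = stream_io.open_stream(MODEL_URI, "rb")
deserializer = TensorDeserializer(stream, device="cuda")
deserializer.load_into_module(model)
deserializer.close()

model.eval()
```

Building the module inside no_init_or_tensor skips the usual weight initialization, so load time is dominated by the streaming read rather than redundant allocation.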
VFX and Rendering
- CoreWeave's Conductor is the "easy button" for cloud rendering at virtually unlimited scale.
- Deploy artist workstations on the industry's most powerful range of GPUs for graphics workloads.
Batch Processing for Life Sciences
- Use Argo Workflows to orchestrate thousands of parallel simulations for drug discovery.
- Access CoreWeave's burst compute capability with Kubeflow Training Operators and frameworks like TensorFlow, PyTorch, MXNet, XGBoost, or MPI to power large-scale molecular dynamics simulations.
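As a rough sketch of the Kubeflow path, the example below submits a distributed PyTorchJob custom resource through the Kubernetes Python client. The namespace, image, script, and replica counts are hypothetical placeholders; the kubeflow.org/v1 PyTorchJob schema comes from the upstream Training Operator, which is assumed to be installed in the cluster.

```python
# A hedged sketch: launching a distributed PyTorchJob via the Kubeflow
# Training Operator. Namespace, image, script, and replica counts are
# hypothetical placeholders.
from kubernetes import client, config

def submit_simulation_job() -> None:
    config.load_kube_config()

    # Pod template shared by the Master and Worker replicas.
    pod_template = {
        "spec": {
            "containers": [{
                "name": "pytorch",  # the Training Operator expects this name
                "image": "nvcr.io/nvidia/pytorch:23.10-py3",  # hypothetical
                "command": ["python", "simulate.py"],          # hypothetical
                "resources": {"limits": {"nvidia.com/gpu": "1"}},
            }]
        }
    }
    job = {
        "apiVersion": "kubeflow.org/v1",
        "kind": "PyTorchJob",
        "metadata": {"name": "md-simulation", "namespace": "tenant-example"},
        "spec": {
            "pytorchReplicaSpecs": {
                "Master": {"replicas": 1, "restartPolicy": "OnFailure",
                           "template": pod_template},
                "Worker": {"replicas": 8, "restartPolicy": "OnFailure",
                           "template": pod_template},
            }
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubeflow.org", version="v1", namespace="tenant-example",
        plural="pytorchjobs", body=job,
    )

if __name__ == "__main__":
    submit_simulation_job()
```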
Metaverse and Pixel Streaming
- Stream interactive, web-based 3D content with PureWeb and TensorWorks' Scalable Pixel Streaming.
Fully managed means fully supported
Our support team manages the infrastructure and reduces the DevOps burden by partnering with your team, freeing you to focus on what you do best: your applications and deployments.
🎉 What's new?
See the release notes for the latest updates and features.