CoreWeave, a cloud provider specialising in large-scale GPU-accelerated workloads, has announced that it has raised $221 million in Series B funding. The funding round was led by alternative asset manager Magnetar Capital, with contributions from NVIDIA, Nat Friedman, and Daniel Gross. CoreWeave intends to use the new funding to expand its specialised cloud infrastructure for compute-intensive workloads, including artificial intelligence and machine learning, visual effects and rendering, batch processing, and pixel streaming. The company aims to meet the high demand for generative AI technology.
This funding will allow CoreWeave to offer purpose-built, customised solutions that can outperform larger, more generalised cloud providers. The funds will also support the company's US data center expansion, with two new centers opening this year, bringing the total to five data centers in North America.
CoreWeave is positioning itself to power the booming AI technology sector with its ability to innovate and iterate more quickly than the larger hyperscalers. The company's CEO and co-founder, Michael Intrator, stated that Magnetar's strong, continued partnership and financial support as lead investor in this Series B round ensure that CoreWeave can maintain its momentum without skipping a beat. He added that the company is thrilled to expand its collaboration with the team at NVIDIA, whose vision and guidance will be invaluable as it continues to scale its organisation.
NVIDIA has released its highest-performance data center GPU, the NVIDIA H100 Tensor Core GPU, along with the NVIDIA HGX H100 platform. CoreWeave has announced that its HGX H100 clusters are live and already serving clients such as Anlatan, the creators of NovelAI. In addition to the HGX H100, CoreWeave offers more than 11 NVIDIA GPU SKUs, interconnected with the NVIDIA Quantum InfiniBand in-network computing platform and available to clients on demand or via reserved-instance contracts.
Manuvir Das, Vice President of Enterprise Computing at NVIDIA, stated that AI has reached an inflection point and that interest in accelerated AI computing infrastructure is surging across the board, from startups to major enterprises. CoreWeave's strategy of delivering accelerated computing infrastructure for generative AI, large language models, and AI factories will help bring the highest-performance, most energy-efficient computing platform to every industry.
Ernie Rogers, Magnetar's Chief Operating Officer, stated that with the seemingly limitless boundaries of AI applications and technologies, the demand for compute-intensive hardware and infrastructure is higher than ever. CoreWeave's innovative, agile, and customisable product offering is well positioned to meet this demand, and the company is experiencing explosive growth as a result. Magnetar is proud to collaborate with NVIDIA in supporting CoreWeave's next phase of growth as the company continues to bolster its already strong position in the marketplace.