AMPD Ventures has announced a ‘Machine Learning Cloud’ initiative featuring AMD Instinct MI100 accelerators along with the AMD ROCm open software platform. Designed to cater to companies’ requirements in the AI, ML and deep learning sectors, the platform will initially be hosted at AMPD’s DC1 data centre in Vancouver, British Columbia, but is soon expected to expand into other territories.
Billed as the world’s fastest high-performance computing GPU, the AMD Instinct MI100 launched in November 2020 and is built on AMD’s new CDNA architecture. According to the company’s official release, this made it the industry’s first data centre GPU to exceed 10 teraflops of FP64 peak performance. AMPD Ventures is now the first company in Canada to offer these accelerators on a service basis.
To deliver data centre deep learning solutions, AMD pairs its Instinct accelerators with the AMD ROCm open software platform, which is designed to bring advanced computing to bear on real-world problems. According to the official release, ROCm is the first open-source, exascale-class platform for accelerated computing built to be independent of any single programming language. It provides open compute languages, compilers, libraries, and tools designed from the ground up to meet the demanding needs of AI scientists and researchers, accelerating code development and helping tackle some of today’s toughest computing challenges.
AMPD also offers these compute resources to British Columbia’s academic artificial intelligence community, alongside programs for commercial customers who want to evaluate their AI compute options in a low-risk environment.
Speaking about the partnership, Brad McCredie, corporate vice president, Data Center GPU and Accelerated Processing at AMD, said the two companies’ shared commitment to the open-source community and open-source technologies is driving ROCm platform innovation.
AMD’s industry-differentiating approach to accelerated compute and heterogeneous workload development gives users unprecedented flexibility, choice, and platform autonomy, and the partnership will let AMPD bring those advantages to its customers on a service basis, McCredie added.
In addition, the two companies are working with the community through the ROCm GitHub forums, providing guidance and support for HIP (the Heterogeneous-computing Interface for Portability) and for tools that ease the conversion of CUDA applications to portable C++ code, the official release stated.
Speaking on the initiative, James Hursthouse, Chief Strategy Officer at AMPD, noted that with British Columbia home to some 200 applied AI companies as well as leading academic and research programs, the new cutting-edge GPU-based hosted environment is expected to yield positive outcomes for the region.
AMPD is also working to bring AI and ML applications to its other core sectors, such as the visual effects industry.