Julia is hailed as a language well suited to scientific computing and machine learning, and it is now being adopted across the world. While programmers have been using Python for about 30 years, Julia has been climbing numerous language popularity rankings despite debuting only in 2012. Julia Computing has been working to make the language accessible and easy to deploy for artificial intelligence and machine learning. A critical piece of the AI story is hardware, and Julia Computing is making advances in that space as well.
While support for native GPU computing has been available in the Julia programming language for some time, it only reached stability and widespread use with the arrival of Julia 1.0 a year ago.
To boost its usage on GPUs, Julia Computing has announced the availability of the language as a pre-bundled container on the NVIDIA GPU Cloud (NGC) container registry. This native support is expected to advance the use of Julia on GPUs, as NGC offers a comprehensive catalogue of GPU-accelerated software for deep learning, machine learning, and HPC. By handling the plumbing, NGC lets users concentrate on building lean models and gathering insights faster.
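A minimal sketch of what pulling and running such a container looks like in practice (the image path and tag shown here are illustrative, not confirmed by this article, and running with GPU access requires the NVIDIA Container Toolkit on the host):

```shell
# Pull the Julia container image from the NGC registry (path/tag illustrative)
docker pull nvcr.io/hpc/julia:v1.2.0

# Start an interactive session with all host GPUs visible to the container
docker run --gpus all -it nvcr.io/hpc/julia:v1.2.0
```

Inside the container, Julia and its GPU stack come pre-installed, so no driver-matching or package compilation is needed on the user's side.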
In contrast to many other programming languages, Julia exposes not only high-level access to GPU-accelerated array primitives but also lets coders write custom GPU kernels. This leverages the full power and flexibility of the underlying hardware without switching languages. The same capability allows engineers to easily reuse and port code from CPU-based applications to the GPU, lowering the barrier to entry and speeding up the time to solution.
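As a sketch of what a hand-written GPU kernel looks like, assuming the CUDAnative.jl and CuArrays.jl packages (since consolidated into CUDA.jl) and a CUDA-capable GPU, note that the kernel body is ordinary Julia code:

```julia
using CUDAnative, CuArrays

# A custom GPU kernel written in plain Julia: element-wise vector addition.
# Each GPU thread computes one element of the output array.
function vadd!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

n = 1024
a = CuArray(rand(Float32, n))  # data lives in GPU memory
b = CuArray(rand(Float32, n))
c = similar(a)

# Launch the kernel: 256 threads per block, enough blocks to cover n elements
@cuda threads=256 blocks=cld(n, 256) vadd!(c, a, b)
```

The same function signature could be run on CPU arrays in a plain loop, which is what makes porting CPU code to the GPU comparatively painless.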
Julia’s GPU Support For A Variety Of AI Apps
Julia’s GPU support serves an enormously wide range of applications, from AI to tracking environmental change. Modern AI would be unthinkable without the computational power of GPUs. For example, users of the Flux.jl machine learning library for Julia can exploit GPUs with a one-line change and no other code adjustments. Likewise, Julia’s differentiable programming support is fully GPU-compatible, giving GPU acceleration to models at the front line of AI research, with the ability to scale from a single user with a GPU in their PC to thousands of GPUs on the biggest supercomputers.
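A minimal sketch of that one-line change in Flux.jl, assuming a CUDA-capable GPU and its array backend are available (the model architecture here is illustrative):

```julia
using Flux

# A small classifier defined as usual, running on the CPU
model = Chain(Dense(784, 32, relu), Dense(32, 10), softmax)

# The one-line change: move the model's parameters to the GPU
model = model |> gpu

# Inputs moved to the GPU the same way; the forward pass now runs on the GPU
x = gpu(rand(Float32, 784, 16))  # a batch of 16 inputs
y = model(x)
```

Training loops, loss functions, and gradients are written identically in either case; only the `|> gpu` call differs.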
Yet the use of Julia on GPUs extends well beyond machine learning. Pumas AI uses Julia’s GPU support to compute personalised drug dosing regimens with the DifferentialEquations.jl suite of solvers – likely the most comprehensive suite of differential equation solvers in any language. Since GPUs are a native target for Julia, running these solvers on GPUs requires minimal changes.
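As a sketch of that pattern with DifferentialEquations.jl, placing the problem state in GPU memory is essentially the only change from the CPU version (the linear ODE below is illustrative, and a GPU array backend such as CuArrays.jl is assumed):

```julia
using DifferentialEquations, CuArrays

# A linear ODE du/dt = A*u with both the operator and the state on the GPU
A  = CuArray(randn(Float32, 100, 100)) ./ 10f0
u0 = CuArray(rand(Float32, 100))

f(u, p, t) = A * u
prob = ODEProblem(f, u0, (0f0, 1f0))

# The solver call is identical to the CPU version; the array type
# of u0 determines where the computation runs
sol = solve(prob, Tsit5())
```

Because the solvers are generic over the array type, the same problem definition runs on CPU arrays or GPU arrays without modification.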
Julia was also used in a massively parallel multi-GPU solver for spontaneous nonlinear multi-physics flow localisation in 3-D, built by Stanford University and the Swiss National Supercomputing Centre and presented at JuliaCon 2019. Here, Julia replaced a legacy framework written in MATLAB and CUDA C, tackling the “two-language problem” by allowing both high-level code and GPU kernels to be expressed in the same language and share the same code base.
Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.