The new technological era is one where task-specific hardware and software are on the rise. This year at Google I/O 2018, Google launched a new generation of Tensor Processing Unit (TPU), already in use to turbocharge a range of its products. Now the Mountain View search giant has announced enhanced Julia capabilities for the TPU ecosystem. To stay relevant in this new era, Julia Computing has developed a method for offloading suitable sections of Julia programs to TPUs using an API and the Google XLA compiler. This development gives developers another option alongside TensorFlow for leveraging Google Cloud.

Google CEO Sundar Pichai said that the new line of TPUs is around eight times more powerful than previous editions. The added Julia support will help Google Cloud reach a bigger pool of developers and data scientists who combine Julia with machine learning. Matrix-heavy operations have long been accelerated by the compute available on Graphics Processing Units, and TPUs extend that trend.

Julia And Machine Learning

Julia is a high-level programming language designed primarily for numerical analysis and computational science, though it can also be used for server-side web applications or as a specification language. It supports concurrent, parallel and distributed computing, and can call C and other low-level languages directly from within Julia code.

Julia also comes packed with Flux, a powerful framework for machine learning and AI tasks. Machine learning has become increasingly complex, and there is a pressing need for differentiable languages in which ordinary code can represent algorithms end to end. Julia's syntax is well suited to expressing such algorithms, and Google Cloud TPU support for Julia is a recognition of both the popularity of the language and its utility in modern machine learning.
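To give a sense of how concisely Flux expresses a model, here is a minimal sketch; the layer sizes and input are invented for illustration and are not taken from the article:

```julia
using Flux

# Stack two dense layers into a small classifier with Chain;
# Dense(in, out, activation) is Flux's basic fully connected layer.
model = Chain(
    Dense(28^2, 64, relu),   # hidden layer for a flattened 28x28 input
    Dense(64, 10),           # 10 output classes
    softmax)                 # normalize outputs to probabilities

x = rand(Float32, 28^2)      # a dummy input vector
ŷ = model(x)                 # forward pass; ŷ is a length-10 probability vector
```

Because the model is ordinary Julia code, the same definition can in principle be differentiated and retargeted to different back ends.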
https://twitter.com/JeffDean/status/1054951415339192321

Specially adapted for neural networks, Flux offers a layer-stacking interface for simple neural network models and can also handle variational autoencoders and other complex architectures. Interestingly, Julia also interoperates with popular frameworks such as TensorFlow and MXNet, securing itself a place in the data science toolkit and workflow.

According to Julia co-creator Viral Shah: "Apart from C and CUDA from NVIDIA, Julia is the only widely used language which has natively built CUDA code generation. So you can write your code in Julia and deploy them to GPUs without knowing any C or C++."

Google Announces XLA Compiler

Julia Computing wants to expand its services and offerings and become available across a large number of workflows. In mid-2017, Julia Computing raised $4.6M in seed funding from investors General Catalyst and Founder Collective. It is one of the newer high-performance computing startups and wants to grow fast; the Julia language itself has grown into a top-10 programming language with more than 1 million downloads.

When Google made its big cloud announcements in 2018, all features were tuned towards TensorFlow only. In September 2018, much to the joy of Julia's creators, Google opened up access to Cloud TPUs. XLA ("Accelerated Linear Algebra") is a partially open-source compiler project released by Google, with a rich intermediate representation (IR) for expressing linear algebra operations to the accelerator. XLA's operations take arrays of basic data types as well as tuples. Its high-level operations include basic arithmetic, generalized linear algebra operations, high-level array operations, special functions and some basic distributed-computation operations.

Julia Computing CTO Keno Fischer was cited in the paper: "Google opened up access to TPUs via the IR of the lower level XLA compiler.
This IR is general purpose and is an optimizing compiler for expressing arbitrary computations of linear algebra primitives and thus provides a good foundation for targeting TPUs by non-Tensorflow users as well as for non-machine learning workloads."

Adapting Julia To Cloud TPU

Each high-level operation has two kinds of operands:

- Static operands, whose values must be available at compile time. A few of these operands may additionally reference other computational modules that are part of the same HLO module.
- Dynamic operands, which consist of tensors and need not be available at compile time.

The researchers outline methods that use Julia's middle-end compiler to determine sufficiently precise information about sufficiently large subregions of the program to amortize any launch overhead. They also emphasise in the paper: "We now have the ability to compile Julia programs to XLA, as long as those programs are written in terms of XLA primitives. Julia programs are not, however, written in terms of arcane HLO operations; they are written in terms of the functions and abstractions provided by Julia's base library. Luckily, Julia's use of multiple dispatch makes it easy to express how the standard library abstractions are implemented in terms of HLO operations."

For many data scientists, the ability to compile Julia through Google Cloud's XLA, and thereby offload computation to TPU devices, will be a welcome addition.
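The multiple-dispatch trick quoted above can be sketched as follows. This is a conceptual illustration only; the type and field names here are hypothetical simplifications, not the actual API from the paper:

```julia
# Conceptual sketch: a wrapper array type lets multiple dispatch reroute
# standard operations to HLO-building methods instead of eager computation.
# (Names below are illustrative, not the real XLA-targeting package API.)
struct HLOValue
    op::Symbol            # stands in for a handle to an HLO operation
    args::Vector{Any}
end

struct TPUArray{T,N}
    value::HLOValue       # symbolic HLO value rather than concrete data
end

import Base: +

# Base.+ on ordinary arrays is untouched; on TPUArray the same call site
# dispatches here and records an HLO "Add" node instead of adding numbers.
function +(a::TPUArray{T,N}, b::TPUArray{T,N}) where {T,N}
    TPUArray{T,N}(HLOValue(:Add, [a.value, b.value]))
end
```

Because dispatch is chosen by argument types, unmodified library code that calls `+` on these arrays builds an HLO graph for the TPU rather than executing on the CPU, which is the essence of the approach Fischer describes.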