Scientists use NVIDIA GPU to mimic living cell 

NVIDIA GPUs simulated around 7,000 genetic information processes over 20 minutes of the cell cycle, in what the scientists believe is the longest and most complex cell simulation to date.

Molecular biology’s goal is to describe the basic processes of life in terms of the laws of physics and chemistry. In 1984, Harold Morowitz proposed studying the simplest living cells, the mycoplasmas, as models that could define life’s basic principles.

Scientists from the University of Illinois, Urbana-Champaign, built a 3D simulation replicating a living cell’s physical and chemical properties. Interested in studying “minimal cells”, they designed and built cellular genomes in the laboratory that exclude non-essential genes, keeping only the genes necessary for the cell’s existence, replication, and functions.

The NVIDIA GPUs simulated around 7,000 genetic information processes over 20 minutes of the cell cycle, in what the scientists believe is the longest and most complex cell simulation to date. Minimal cells are far simpler than naturally occurring ones, which makes them easier to recreate digitally. After further tests, the model could help scientists predict how changes in conditions and genomes will affect the cell’s functions. Even at this stage, the minimal cell gives insights into the physical and chemical processes that lay the foundation of living cells. The genetically minimal bacterial cell has been named JCVI-syn3A.

How did they pull it off?

The Illinois researchers used Lattice Microbes, a GPU-accelerated software package co-developed by Zaida Luthey-Schulten, to simulate the minimal cell in 3D. To build the living cell model, they first simulated one of the simplest living cells, a mycoplasma. The model was based on a version of the mycoplasma cell, with under 500 genes, synthesised by scientists at the J. Craig Venter Institute in La Jolla, California.

Luthey-Schulten’s team used the known properties of the mycoplasma’s inner workings, such as its amino acids, nucleotides, lipids and small-molecule metabolites, together with its DNA, RNA, proteins, and membranes, to build the model.

The Lattice Microbes software ran on NVIDIA Tensor Core GPUs, with the researchers running a continuous simulation of the cell up to the point where it replicated its DNA. The model showed that the cell spends a large share of its energy transporting molecules across its membrane, which fits its profile as a parasitic cell. The project’s lead author, Zane Thornburg, is also working on a GPU-accelerated project to simulate cell growth and division in 3D.

The team also recently received NVIDIA DGX systems and RTX A5000 GPUs to complete the work, and found that the A5000 GPUs sped up the benchmark simulation by 40% compared with a previous-generation NVIDIA GPU.

Why use a GPU?

Developing detailed models of complex biological systems is a key requirement for integrating the expanding body of experimental data. In addition, biochemical models offer an easy way to test different experimental conditions, which helps uncover the dynamics of biological systems.

However, these simulations require more computational power than a typical desktop provides, so expensive high-performance computing solutions were traditionally mandatory. Recently, an alternative has emerged: general-purpose scientific computing on graphics processing units (GPGPU). For as little as $400, a GPU brings substantial computing power to a small machine, provided the problem can be expressed with a suitable set of parallel algorithms. GPU computing was originally developed for graphics-intensive problems such as 3D rendering and gaming, and is now applied to domains like complex modelling, simulation, and fast-tracked research.

Interestingly, gaming links directly to biology research: while gaming, the output on your computer screen is made up of many independent mathematical calculations. These can be computed one at a time, or the screen can be divided into parts that are calculated simultaneously. This is the GPU’s speciality: performing many identical calculations at once, each independent of the others. The same kind of calculation is used in research to mimic a living organism and uncover the mysteries behind its functions. Meanwhile, the gaming industry’s push to meet gamers’ expectations has steadily improved the quality of GPUs.
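That per-pixel independence can be sketched in a few lines of code. The toy “shader” below (purely illustrative, not tied to any real rendering API or to the cell study) computes the brightness of each pixel from its coordinates alone; because no pixel depends on another, the loop order does not matter, and every iteration could in principle run on its own GPU thread.

```python
WIDTH, HEIGHT = 4, 2

def shade(x: int, y: int) -> int:
    """Brightness for pixel (x, y): a simple left-to-right gradient.
    Uses only this pixel's coordinates -- no other pixel's result."""
    return round(255 * x / (WIDTH - 1))

# Each (x, y) pair is an independent calculation, so the whole frame
# could be computed in any order, or all at once on parallel hardware.
frame = [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(frame)  # → [[0, 85, 170, 255], [0, 85, 170, 255]]
```

A GPU exploits exactly this structure: rather than looping, it assigns each pixel to a hardware thread and evaluates them all simultaneously.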

GPUs are ideal for executing a program over many data elements at once. This approach is called data parallelism, and it maps data elements to the parallel threads available on the GPU. Areas that rely on data parallelism include 3D rendering, stereo vision, pattern recognition, and image, video, and medical-imaging applications.
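A minimal CPU-side sketch of this mapping, using a thread pool as a small stand-in for the GPU’s thousands of hardware threads (the function and data here are invented for illustration, not drawn from the article):

```python
from concurrent.futures import ThreadPoolExecutor

def normalise(value: float) -> float:
    """Identical, independent per-element work: scale into [0, 1]."""
    return value / 255.0

data = [0.0, 51.0, 102.0, 255.0]

# Data parallelism: the SAME function is applied to every element,
# with each element handed to its own worker thread.
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(normalise, data))

print(result)  # → [0.0, 0.2, 0.4, 1.0]
```

On a GPU the same pattern runs at vastly larger scale, which is why workloads made of many identical, independent calculations see such dramatic speed-ups.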

Akashdeep Arul
Akashdeep Arul is a technology journalist who seeks to analyze the advancements and developments in technology that affect our everyday lives. His articles primarily focus upon the business, cultural, social and entertainment side of the technology sector.
