IT News & Events

News about IT at Indiana University and the world


Carbonate’s deep learning nodes: Building the future of AI research

From decoding genomes to analyzing the contents of thousands of images and videos, artificial intelligence (AI) is redefining research. At Indiana University (IU), the Deep Learning (DL) Resource on Carbonate provides processing power and specialized support for over 100 projects across a wide range of fields that engage the potential of AI.

Scott Michael, manager of Research Applications and Deep Learning, UITS Research Technologies

Starting in June 2019 with a 12-node expansion of the Carbonate supercomputing cluster, the DL resource has delivered 759,688 core hours and 92,783 GPU hours across more than 130 projects. “With its uniquely capable V100 GPUs, this resource gives IU’s researchers the ability to get ahead of the curve with their research using AI techniques,” said Scott Michael, manager of Research Applications and Deep Learning. The expansion is part of Research Technologies’ effort to meet growing interest in AI. “We wanted to capitalize on that interest and give people who were very keen on putting in NSF or NIH proposals that involved deep learning a platform to conduct that research on,” said Michael.

Deep learning nodes enable researchers to develop “neural networks,” complex systems for processing large quantities of information. Samantha Wood, a researcher in the Informatics Department, is creating a visual toolkit that makes AI and Deep Reinforcement Learning (DRL) more accessible to researchers without a computer science background. “DRL models are so successful because they mimic the biological process of learning,” said Wood in her proposal. “Agents start with untrained ‘brains,’ then develop their own curriculum of learning through their interactions with the environment. The DRL agents are equipped with a full ‘brain’ for processing stimuli, making associations, encoding memories, and performing actions. As a result, these models can also function as biologically-inspired, formal models of cognition,” she said.
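
The learning loop Wood describes — an agent that starts with an untrained “brain” and builds competence purely through interaction with its environment — can be sketched in miniature. The example below is hypothetical and not drawn from her toolkit: a tabular Q-learning agent in an invented five-cell corridor, where reaching the rightmost cell earns a reward. Deep RL replaces the lookup table with a neural network, but the interaction loop is the same.

```python
import random

# Hypothetical sketch of reinforcement learning from environment
# interaction. The "environment" is a 5-cell corridor; reaching the
# rightmost cell (index 4) earns a reward of 1.

N_STATES = 5
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 200

def step(state, action):
    """Advance the corridor; action 1 = move right, 0 = move left."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # the agent's "brain"
    for _ in range(EPISODES):
        state, done = 0, False
        while not done:
            action = random.randint(0, 1)      # explore the environment
            nxt, reward, done = step(state, action)
            # Q-learning update: move the estimate toward the observed
            # reward plus the discounted value of the next state.
            target = reward + GAMMA * max(q[nxt])
            q[state][action] += ALPHA * (target - q[state][action])
            state = nxt
    return q

q = train()
# After training, q[s][1] > q[s][0] in every non-terminal cell: the
# agent has learned, without instruction, that right leads to reward.
```

Because Q-learning is off-policy, the agent can learn the optimal behavior even while acting randomly; in a deep RL toolkit the table `q` would be a network and the corridor a rich simulated environment.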

Similar to a human brain, AI’s neural networks can adapt to virtually any set of information. “If you are interested in analyzing stop signs, for example,” said Michael, “then you can set up rules within the neural network that cover all angles of viewing, illumination, and different settings in a much more comprehensive way. So while humans can scan through thousands of pictures and classify them, once you have the trained model, the prediction is much faster and more efficient.”
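
Michael’s point about training cost versus prediction cost can be illustrated with a deliberately tiny, hypothetical sketch (none of this code is from IU’s resource): a perceptron that learns to separate bright 2×2 “sign-like” patches from dark ones. Training loops over the labeled examples many times; once the weights are learned, classifying a new patch is a single dot product.

```python
# Invented toy data: 2x2 "images" flattened to 4 pixel intensities.
# Label 1 = bright, sign-like patch; label 0 = dark background.

def predict(weights, bias, pixels):
    """Fast part: one weighted sum classifies a new image."""
    s = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20):
    """Slow part: repeated passes over the labeled examples."""
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            err = label - predict(w, b, pixels)   # perceptron rule
            w = [wi + err * p for wi, p in zip(w, pixels)]
            b += err
    return w, b

samples = [
    ([0.9, 0.8, 0.9, 0.7], 1),   # bright patch -> "sign"
    ([0.8, 0.9, 0.7, 0.9], 1),
    ([0.1, 0.2, 0.1, 0.1], 0),   # dark patch  -> "not sign"
    ([0.2, 0.1, 0.2, 0.2], 0),
]
w, b = train(samples)
# An unseen patch is now classified in a single cheap pass.
```

A real stop-sign classifier would use a deep convolutional network and thousands of photographed examples covering angle, illumination, and setting, but the asymmetry is the same: training is expensive once, prediction is cheap forever after.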

This technique also has applications in genomics. “If you were trying to find a very particular mutation in a genome and you knew precisely its location, then it would be less computationally expensive to just scan the genome and detect that particular mutation,” Michael explained. Still, a genome is far more complex than a stop sign and requires correspondingly more sophisticated models to identify mutations accurately. Ideally, Michael says, a copy of a genome could be uploaded to a neural network, which would return an accurate reading of its mutations. Researchers building such networks on Carbonate’s DL resource haven’t completed a trained model yet, but one is on the horizon. “The end goal is to provide services that very accurately and rapidly classify data,” he said.
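
The scan-versus-model contrast Michael draws can be sketched with an invented example: when the exact locus and variant base of a point mutation are known in advance, detection is a constant-time string comparison and no trained model is needed. The sequence, locus, and variant below are hypothetical; it is the unknown or context-dependent mutations that motivate a learned model instead.

```python
# Hypothetical reference sequence and known point mutation.
REFERENCE = "ATGGCCATTGTAATGGGCCGCTG"
LOCUS, REF_BASE, VARIANT = 7, "T", "C"   # invented for illustration

def has_known_mutation(genome, locus=LOCUS, variant=VARIANT):
    """Cheap case: one base lookup at a known locus, no model needed."""
    return genome[locus] == variant

# A sample genome carrying the variant at the known locus.
sample = REFERENCE[:LOCUS] + VARIANT + REFERENCE[LOCUS + 1:]
# has_known_mutation(REFERENCE) is False; has_known_mutation(sample) is True.
```

The learned-model approach earns its computational cost only in the hard case: mutations whose positions are unknown, or whose significance depends on surrounding sequence context that a simple scan cannot capture.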