IU's Big Red II Supercomputer

IU Doctoral Student Creates Software to Take Supercomputers to New Speeds

While every IU student uses IU's email and internet resources, not many are aware of the cutting-edge technology that is also at their disposal. Yet PhD student Udayanga Wickramasinghe and his advisor, Prof. Andrew Lumsdaine, have made improving this technology the focus of their work.


Doctoral student Udayanga Wickramasinghe

The most prominent high-performance computing (HPC) resource available at IU is the supercomputer Big Red II. Big Red II is a cluster computer - a set of thousands of processors connected to function as one system. While these clusters supply enormous power, however, coordinating many processors can also slow programs down. What researchers want, essentially, is to use the minimum number of processors needed to solve a problem while utilizing those resources as fully as possible. Automatically dividing and provisioning resources across compute nodes for maximum efficiency is a cornerstone of Wickramasinghe's work.

A supercomputer is composed of several distinct elements - processors, memory, an I/O system, and an interconnect. The interconnect is what allows communication among the processors and between them and the other elements. Big Red II has a high-speed interconnect that lets nodes access one another's memory directly, without the intervention of processors, and it is on this capability that Wickramasinghe is building his middleware. More specifically, his middleware acts as an infrastructure that lets programs and other systems efficiently use the memory of one or more peer nodes on behalf of a particular host, and it optimizes that communication using novel methods.
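To make the idea of direct remote-memory access concrete, here is a minimal sketch - not Wickramasinghe's actual middleware, just an illustration using standard MPI one-sided communication - in which one node reads an array that lives in a peer node's memory over the interconnect, without the peer's processor taking part in the transfer:

```c
/* Hedged example: standard MPI one-sided communication (RMA), shown only to
 * illustrate direct access to a peer node's memory; it is not the middleware
 * described in the article. */
#include <mpi.h>
#include <stdio.h>

#define N 4

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank exposes a small buffer as a "window" that peers may access. */
    double buf[N];
    for (int i = 0; i < N; i++)
        buf[i] = rank * 100.0 + i;

    MPI_Win win;
    MPI_Win_create(buf, N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double remote[N];
    MPI_Win_fence(0, win);          /* open an access epoch */
    if (rank == 0) {
        /* Pull rank 1's buffer straight out of its memory over the network;
         * rank 1's CPU does not have to post a matching receive. */
        MPI_Get(remote, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);          /* complete all outstanding transfers */

    if (rank == 0)
        for (int i = 0; i < N; i++)
            printf("remote[%d] = %.1f\n", i, remote[i]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and run on two ranks, rank 0 prints the values stored on rank 1. Middleware of the kind described above manages this sort of remote memory automatically on the application's behalf, rather than requiring the programmer to orchestrate each transfer.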

Wickramasinghe's research contributes to a general trend in the supercomputing field toward a new level of speed and efficiency called exascale computing, the subject of an ongoing technological race. At the moment, the fastest systems in the world are at the petascale, meaning they can perform one quadrillion calculations per second. Exascale computers, projected to appear in the mid-2020s, would be capable of one billion billion calculations per second. Wickramasinghe says his work will contribute to the transition to exascale computing because "in exascale you are talking about not just a couple of thousands nodes. It can be hundred thousand or a million nodes dispersed across. So, even a tiny improvement in better utilization of resources such as the memory and interconnect, could potentially result in few folds of savings in the higher levels, like in the runtime system and applications."

While Big Red II does not reach petascale speed, it has more than enough capability as a large cluster computer to test systems that would also work on even larger machines. The scale and cutting-edge software it offers are what make research like Wickramasinghe's possible. In summer 2018, Wickramasinghe will present his work at the Workshop on Advances in High-Performance Algorithms, Middleware and Applications (AHPAMA 2018), held in conjunction with the IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid) in Washington, DC, and at the Workshop for Runtime and Operating Systems for Supercomputers (ROSS 2018), held in conjunction with the ACM International Symposium on High-Performance Parallel and Distributed Computing (HPDC) in Tempe, AZ.