A new programming language for high-performance computers | MIT News

High-performance computing is needed for an ever-growing number of tasks – such as image processing and various deep learning applications on neural networks – in which huge volumes of data must be processed, and processed quickly, or else they could take ridiculous amounts of time. It is widely accepted that, in carrying out operations of this kind, there are inevitable trade-offs between speed and reliability. If speed is the top priority, according to this view, then reliability will likely suffer, and vice versa.

However, a team of researchers, based primarily at MIT, is challenging that notion, saying that you can, in fact, have it all. With the new programming language, which they wrote specifically for high-performance computing, says Amanda Liu, a second-year doctoral student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), “speed and accuracy no longer have to compete. Instead, they can go hand in hand in the programs we write.”

Liu – along with University of California at Berkeley postdoc Gilbert Louis Bernstein, MIT Associate Professor Adam Chlipala, and MIT Assistant Professor Jonathan Ragan-Kelley – described the potential of their recently developed creation, “A Tensor Language” (ATL), last month at the Principles of Programming Languages (POPL) conference in Philadelphia.

“Everything in our language,” says Liu, “aims to produce either a single number or a tensor.” Tensors, in turn, are generalizations of vectors and matrices. Whereas vectors are one-dimensional objects (often represented by individual arrows) and matrices are familiar two-dimensional arrays of numbers, tensors are n-dimensional arrays, which could take the form of a 3x3x3 array, for example, or something of even higher (or lower) dimensionality.
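The progression from vectors to matrices to tensors can be illustrated with NumPy arrays – a sketch of the general mathematical idea, not ATL code:

```python
import numpy as np

# A vector is a one-dimensional array of numbers.
vector = np.array([1.0, 2.0, 3.0])

# A matrix is a two-dimensional array of numbers.
matrix = np.ones((3, 3))

# A tensor generalizes both to n dimensions, e.g. a 3x3x3 array.
tensor = np.zeros((3, 3, 3))

print(vector.ndim, matrix.ndim, tensor.ndim)  # prints: 1 2 3
```

In this view, a single number is simply a zero-dimensional tensor, which is why an ATL program producing “either a single number or a tensor” covers both cases uniformly.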

The point of an algorithm or a computer program is to carry out a particular calculation. But there can be many different ways to write that program – “a bewildering variety of different code realizations,” as Liu and her coauthors write in their soon-to-be-published conference paper – some considerably faster than others. ATL’s main reason for being, she explains, is this: “Because high-performance computing is so resource-intensive, you want to be able to modify, or rewrite, programs into an optimal form in order to speed things up. One often starts with the program that is easiest to write, but that may not be the fastest way to run it, so further adjustments are still needed.”

For example, suppose an image is represented by an array of 100×100 numbers, each corresponding to a pixel, and you want to get the average value of those numbers. This could be done in a two-step calculation by first determining the mean of each row and then taking the mean of those row means. ATL has an associated toolkit – what computer scientists call a “framework” – that could show how this two-step process can be converted into a faster one-step process.
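The rewrite described above can be sketched in NumPy – this illustrates the kind of equivalence ATL’s framework would verify, not ATL itself:

```python
import numpy as np

# A 100x100 "image" of pixel values (arbitrary sample data).
image = np.arange(100 * 100, dtype=float).reshape(100, 100)

# Two-step version: the mean of each row, then the mean of those row means.
row_means = image.mean(axis=1)
two_step = row_means.mean()

# One-step version: a single pass over all pixels at once.
one_step = image.mean()

# Because every row has the same length, the two agree exactly
# (up to floating-point rounding).
assert abs(two_step - one_step) < 1e-9
```

The two versions compute the same answer here, but the one-step form avoids materializing the intermediate array of row means – precisely the sort of optimization whose correctness ATL is designed to prove rather than assume.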

“We can guarantee that this optimization is correct by using what’s called a proof assistant,” Liu says. To this end, the team’s new language builds on an existing language, Coq, which contains a proof assistant. The proof assistant, in turn, has the inherent ability to prove its assertions in a mathematically rigorous way.

Coq had another intrinsic characteristic that made it appealing to the MIT-based group: programs written in it, or adaptations of it, always terminate and cannot run forever in endless loops (as can happen with programs written in Java, for example). “We run a program to get a single answer – a number or a tensor,” Liu says. “A program that never terminates would be useless to us, but termination is something we get for free by using Coq.”

The ATL project combines two of the main research interests of Ragan-Kelley and Chlipala. Ragan-Kelley has a long-standing interest in the optimization of algorithms in the context of high-performance computing. Chlipala, for his part, has focused more on the formal (as in mathematically based) verification of algorithmic optimizations. This represents their first collaboration. Bernstein and Liu were brought into the effort last year, and ATL is the result.

It now stands as the first, and so far only, tensor language with formally verified optimizations. Liu warns, however, that ATL is still just a prototype – albeit a promising one – which has been tested on a number of small programs. “One of our main goals, going forward, is to improve the scalability of ATL, so it can be used for the larger programs we see in the real world,” she says.

In the past, optimizations for these programs were usually done by hand, in a much more ad-hoc way, which often involved trial and error, and sometimes lots of errors. With ATL, adds Liu, “people will be able to take a much more principled approach to rewriting these programs – and do so with greater ease and greater assurance of correctness.”
