DeepMind’s AlphaCode attains ‘average’ rating in programming competition

Credit: Pixabay/CC0 Public Domain

A team of researchers at DeepMind has tackled another difficult task—generating computer code to satisfy a natural language request. In their paper published in the journal Science, the group describes the approach they used in creating their AI app and outlines how well it did when pitted against human programmers. J. Zico Kolter with Carnegie Mellon University has published a Perspective piece in the same journal issue outlining many of the issues involved in getting a computer to generate computer code and the work done by the team in London.

Generating computer code to create a program or application that will carry out a desired purpose varies from the simple to the extremely complex. Humans have been streamlining the approach for decades in an attempt to allow individuals or groups to create more sophisticated programs in a reasonably efficient manner. Much more recently, computer scientists have begun contemplating the idea of having computers do the programming—an approach that represents a mammoth shift in the kinds of things that computers can do.

If a computer could be made to create code on the fly, any user could ask their computer to create programs specifically for them to satisfy their unique needs—and to do it quickly and at no cost. The catch is that scientists must first figure out how to achieve that goal. The current hope is that deep-learning neural networks can do the job. If such a network can be taught how programming works, and how to achieve desired results using the code it writes, it should be able to improve its own abilities over time by reviewing its own results and the results of others. And that, in a nutshell, is what the team at DeepMind is doing.

The team created a system called AlphaCode that is able to “listen” to a human voice (or read text on a screen) describing what a computer program needs to do and then to “think” about the prompt. Next, it must translate the prompt into a plan of action, and once it has one, it must convert the user’s request into a series of steps that can be turned into computer code. This whole process will sound familiar to computer scientists, because it is essentially how software systems have been designed for many years.
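The flow described here (natural-language prompt in, working code out) can be sketched as a small pipeline. The sketch below is purely illustrative and is not DeepMind’s implementation: the sample_program() stub, the prompt, the candidate count, and the worked example are all invented for this article, and in a real system the stub would be replaced by a large neural network trained on source code.

# Illustrative sketch of a prompt-to-code pipeline; NOT AlphaCode itself.
# sample_program() stands in for a learned model and returns a hard-coded
# candidate so the script runs on its own.

def sample_program(prompt: str) -> str:
    """Stand-in for a model that maps a natural-language prompt to code."""
    # Hypothetical output for the example prompt used below.
    return (
        "def solve(numbers):\n"
        "    return sorted(numbers)\n"
    )

def generate_candidates(prompt: str, n: int = 3) -> list[str]:
    """Sample several candidate programs for the same prompt."""
    return [sample_program(prompt) for _ in range(n)]

def passes_example(candidate: str, example_in, example_out) -> bool:
    """Execute a candidate program and check it against one worked example."""
    namespace = {}
    exec(candidate, namespace)  # turn the generated source text into a callable
    return namespace["solve"](example_in) == example_out

if __name__ == "__main__":
    prompt = "Write a function solve(numbers) that returns the numbers sorted ascending."
    for candidate in generate_candidates(prompt):
        if passes_example(candidate, [3, 1, 2], [1, 2, 3]):
            print("Accepted candidate:\n" + candidate)
            break

Checking each candidate against a worked example mirrors the idea, mentioned above, of the system reviewing its own results rather than trusting its first attempt.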

In this new effort, the researchers did not attempt to teach their system how to code by showing it how code structure works; instead, they simply trained it on large amounts of existing code and allowed it to learn by observing. The approach appears to be working: AlphaCode scored an “average” rating when entered into a programming competition, an impressive start considering the project is still in its early stages.
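For a sense of what the competition setting demands, a contest judge runs each submission against hidden test cases, so the program AlphaCode must ultimately emit is a complete, stand-alone script that reads from standard input and writes to standard output. The toy problem below (summing a list of integers) is invented for illustration and is far simpler than real contest tasks.

# A toy stand-alone program in the style a programming-contest judge expects:
# parse the input format, compute the answer, print it to standard output.
# The problem (sum n integers) is invented purely for illustration.
import sys

def main() -> None:
    data = sys.stdin.read().split()
    n = int(data[0])                     # first token: how many numbers follow
    numbers = list(map(int, data[1:1 + n]))
    print(sum(numbers))                  # the judge compares this to the expected output

if __name__ == "__main__":
    main()

Piping the input "3 1 2 3" into the script prints 6, which is exactly the kind of output an automated judge would verify.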

More information:
Yujia Li et al., Competition-level code generation with AlphaCode, Science (2022). DOI: 10.1126/science.abq1158. www.science.org/doi/10.1126/science.abq1158

J. Zico Kolter, AlphaCode and “data-driven” programming, Science (2022). DOI: 10.1126/science.add8258

© 2022 Science X Network

Citation: DeepMind’s AlphaCode attains ‘average’ rating in programming competition (2022, December 9) retrieved 12 December 2022 from https://techxplore.com/news/2022-12-deepmind-alphacode-average-competition.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.