The UK can’t continue its shambolic stop-go approach to supercomputing

The exascale computer’s cancellation underlines the need for a long-term vision that transcends political whim, say Peter Coveney and Roger Highfield

August 9, 2024
A ‘system error’ message on a computer screen
Source: Yevhenii Dubinko/iStock

We have just had a powerful reminder that, while some artificial intelligences (AIs) can create nonsense in the form of what are called hallucinations, governments have the capacity to hallucinate too, and in novel ways, when it comes to AI.

When the UK government announced last March that £900 million would be invested in exascale computing, many welcomed the ambition of the home of Lovelace, Babbage, Turing and the Colossus to rekindle its status as a computing pioneer with the latest and largest kind of supercomputer, one capable of a quintillion (10¹⁸) floating-point computations per second.

But although there was delight that an £800 million supercomputer in Edinburgh would put the UK near the forefront of a technology revolution, it was unclear where all the money was coming from. Last week, the Labour government announced that it would shelve these underfunded plans. As we suspected, they were indeed a hallucination.

The debacle is a reminder that the way the UK planned to surf the wave of computing developments has been a stop-go shambles. By comparison, the US, Europe, Germany, China, Japan, Singapore and Australia (the last two working together) have spent more than a decade diligently planning for the exascale supercomputer revolution.

Another kind of government hallucination is inspired by the engines of the AI revolution: powerful microchips called GPUs, or graphics processing units. First developed to render video-game graphics, GPUs can process swathes of data in parallel, so that a giant task is broken down into manageable pieces that can be tackled simultaneously.
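To make that divide-and-conquer idea concrete, here is a minimal sketch in Python; the array size, chunk count and worker count are illustrative choices of ours, not figures from any real system, and a GPU applies the same principle in hardware across thousands of cores rather than a handful of processes.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def partial_sum(chunk):
    # Each worker handles one manageable piece of the overall task.
    return float(np.sum(chunk))


if __name__ == "__main__":
    data = np.random.rand(1_000_000)      # the "giant task": one big array
    chunks = np.array_split(data, 8)      # broken down into manageable pieces
    with ProcessPoolExecutor(max_workers=8) as pool:
        # The pieces are tackled simultaneously, then recombined.
        total = sum(pool.map(partial_sum, chunks))
    print(f"parallel sum = {total:.2f}; serial check = {data.sum():.2f}")
```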

In particular, GPUs are fundamental for the energy-hungry task of training large language models (LLMs). But the UK has not grown the capacity to exploit this kind of AI and most of our researchers still depend on relatively few GPUs, not the tens of thousands in an exascale machine.

Crucially, but far too easily forgotten, GPUs are also fundamental for traditional supercomputer users, who complement theory and experiment with simulations. Some can already run vast models on more than 40,000 GPUs to solve equations that predict the weather, the climate or, in our case, the way that blood surges around the body, as part of the emerging science of human “healthcasts”. An exascale machine offers a way to accelerate these efforts too, and often features AI to enhance these computations.

However, the last government hallucinated that GPUs were somehow synonymous with AI. In a bizarre move, policymakers held classic high-performance computing researchers at arm’s length from Isambard-AI, the University of Bristol facility built on thousands of GPUs and billed as an AI machine, even though it would also be effective for conventional modelling and simulation.

Restricting our next generation supercomputer to AI is a bit like insisting that the James Webb Space Telescope only be used by AI researchers who harness observational datasets rather than astronomers trying to understand cosmic mysteries such as dark matter.

There is another hallucination, one that is disturbingly widespread: AI can do lots of things, so it can easily “do science”, too. But today’s AI falls well short of that claim. Forms of AI that are fully compatible with the scientific method need to be developed as a priority.

What now? There will be siren calls from industry giants to use cloud computing, rather than depend on a single exascale machine that burns megawatts of power. But there is no way cloud computing can compete with “big iron”, just as the muscle power of thousands of cats does not translate into the speed of a single cheetah. Cloud computers are nowhere near powerful enough, cost a fortune when operated by US tech giants and risk leaking confidential national data.

The UK has recently, and very belatedly, joined EuroHPC, a multibillion-euro high-performance computing programme. It features machines that can manage hundreds of petaflops (“flops” being floating-point operations per second) and should have an exascale machine next year, while in the UK it is also hoped that membership will unlock significant opportunities through the integration of quantum devices with conventional supercomputers.

We have had plenty of tardy reviews of UK computing infrastructure. What is needed, however, is a national vision that includes hardware, software and people, and which has ambition stretching over decades, free of the hallucinatory whims of evanescent ministers in search of shiny new things with which to make headlines. It must not be confined to AI but should acknowledge the convergence of AI with conventional high-performance computing, bringing the two together to become more than the sum of their parts.

It is time, for instance, to lay down systematic plans and proper funding for a long-term move to the zettascale and beyond, including a recognition of the vital importance of educating, training, recruiting and retaining people with the right know-how. Without this, the belief that the UK could compete in, let alone dominate, a game of immense global and strategic importance is yet another mirage.

Peter Coveney is director of the Centre for Computational Science at UCL and Roger Highfield is science director of the Science Museum. They are co-authors of Virtual You.
