Advanced Computing Is Foundational to Scientific Progress
Congress’ proposal to cut funding for the Advanced Scientific Computing Research (ASCR) program arrives at a critical moment in emerging technology competition, and it is a mistake.
If you’re like most Americans, you probably don’t spend much time thinking about high-performance computing (HPC) or its importance for the newest technologies and scientific advances. It’s even less likely that you’ve heard of the Advanced Scientific Computing Research (ASCR) program within the Department of Energy’s Office of Science. But the ability of this office to fulfill its mission, advancing applied mathematics and computing capabilities in the United States, is critical to the country’s ability to achieve scientific and technological progress, meet its energy and climate goals, and improve national security.
And now the program is sounding the alarm. A recent report from the ASCR advisory board notes that we are at an inflection point. The successful culmination of the exascale computing effort has left prospects for future funding, and for the retention of talent within the Department of Energy, uncertain, at a time when American leadership in advanced computing is no longer undisputed.
The current House and Senate proposals to cut ASCR funding in the FY24 budget dangerously undermine the goal of strengthening American science, technology, and innovation at a moment of heightened geopolitical competition.

While budget cuts will be required under the negotiated spending caps for 2024, there are areas, such as high energy physics (sorry, high energy physicists), that are far less foundational to scientific and technological progress than advanced computing.
Progress in high-performance computing, building the next generation of supercomputers, will underpin advances in artificial intelligence, scientific and energy research, and other projects relevant to economic and national security. Advancement in the field is far too important to be left to industry research, and international competition, particularly from China, is stiffening.
Congress should robustly fund ASCR, including meeting the requested FY24 budget increase; articulate a decadal funding plan and strategic vision for the next generation of supercomputers; and work to retain and attract the talented researchers who will develop the next generation of high-performance computers at DOE and the National Labs.
Some Background
High-performance computing (HPC), or advanced computing, uses powerful computers called supercomputers, or clusters of computers working together, to achieve much greater performance than a regular phone or computer. By performance, we generally mean the number of operations (calculations) performed per second. The standard measure of computing performance is FLOPS: floating-point operations per second.
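To get a feel for what that measure means, here is a crude way to estimate a floating-point rate (a pure-Python back-of-the-envelope sketch, not a rigorous benchmark; optimized numerical libraries and GPUs run orders of magnitude faster):

```python
import time

# Time n multiply-add operations in a plain Python loop.
# Each pass does one multiply and one add: 2 floating-point ops.
n = 10_000_000
x, y = 0.5, 1.5
total = 0.0

start = time.perf_counter()
for _ in range(n):
    total += x * y
elapsed = time.perf_counter() - start

# Rough rate: 2 FLOPs per iteration, divided by elapsed seconds.
print(f"~{2 * n / elapsed:.2e} FLOPS (unoptimized Python)")
```

A plain interpreted loop like this typically lands in the tens of megaFLOPS; vectorized or compiled code on the same hardware can be thousands of times faster, which is why “performance” always depends on how a machine is programmed as well as on the hardware itself.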
What makes supercomputers so super? To start, they are much, much faster than your regular old laptop. A good laptop can do about 300 million floating-point operations per second. Frontier, on the other hand, the newest exascale supercomputer, hosted at Oak Ridge National Laboratory and developed with the help of ASCR, has a peak performance of about 1.6 exaFLOPS. Exa means 10^18, or 1,000,000,000,000,000,000: a billion billion operations per second.
This is mind-bogglingly fast. To really drive the point home, how long would it take a regular laptop to do what Frontier does in one second? Divide 1.6x10^18 by 300x10^6, and you find it would take about 170 years.
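For the skeptical, the arithmetic is easy to check (a quick sketch using the same figures):

```python
# Back-of-the-envelope check of the "170 years" figure.
frontier_flops = 1.6e18   # Frontier peak: ~1.6 exaFLOPS
laptop_flops = 300e6      # the ~300 MFLOPS laptop figure above

seconds = frontier_flops / laptop_flops   # ~5.3 billion seconds
years = seconds / (365 * 24 * 60 * 60)
print(f"{years:.0f} years")               # -> 169 years
```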
But why do we care? What good is a computer that can do things really, really fast? Well, a lot of good. Here’s a figure from the beginning of the ASCR report showing some of the research areas that benefit from high-performance computing:

From medicine to energy to the basic sciences, nearly every area of technology is touched by advanced computing. The most advanced weather forecasts rely on it. And as IBM says, artificial intelligence and supercomputing have become synonymous. Today, faster computers mean better science. HPC is an enabling technology: it underlies nearly all of our modern advances. It is not a given that the fastest computers will yield the best science. But it is almost certain that you won’t have the best science without them.
It is therefore imperative that the United States invest heavily in HPC research today, so that future advances in advanced computing continue to benefit the United States scientifically and economically.
The ASCR Advisory Report
The Exascale Computing Project (ECP) is nearing a successful conclusion. The United States has one operational exascale computer, Frontier, with two more, Aurora and El Capitan, expected to be completed in 2023. So what’s the cause for alarm? Why is it important that the Advanced Scientific Computing Research program remain adequately funded?
The end of the ECP marks a turning point. It is critical that the United States stay competitive in the field of advanced computing; the rate of our scientific and technological progress depends upon it.
The ASCR report identifies a number of challenges to this progress. From the key findings:
The end of the Exascale Computing Project (ECP) is both a success and a huge risk. The project delivered great capabilities, both human and technical. Now, however, DOE is highly vulnerable to losing the knowledge and skills of trained staff as future funding is unclear.
U.S., DOE, and ASCR leadership in key areas is under threat. This situation is due to increased international competition (e.g., it is reported that China may deploy ten exascale machines by 2025) and geopolitical changes (e.g., a less cooperative and more competitive relationship with China), as well as increased market pressures in the United States that draw talent, capital, and attention toward near-term commercial objectives.
International competition, large companies with different incentives siphoning talent, and uncertain prospects for future funding are identified as three major challenges for our advanced computing future.
International Competition
On the international front, China is the main competitor, with plans to deploy ten exascale machines by 2025 compared to the three planned by the United States. Top500 is a website that compiles a regular list of the 500 most powerful HPC machines, and its data attest to the progress China has made in the field in recent years.

The reason the number of powerful machines matters comes down to resource allocation: time to run codes on these machines is in high demand. Scientists across the National Labs and academia, as well as industry players, all want time on the best machines. These “compute hours” are a coveted resource among computational researchers. Having a greater number of powerful machines available to the research and business communities therefore allows more scientific simulations to run simultaneously, leading (in theory) to a greater rate of scientific progress.
As China has surpassed the United States in recent years in the number of high-profile scientific publications, there is justified anxiety about the two nations’ respective rates of scientific progress. Congress, then, has plenty of reason to ensure HPC does not become a bottleneck for scientific progress in the US.
The report provides a roadmap for how falling behind internationally would be harmful, as well as a historical example of this process in action:
First, foreign leadership in supercomputing is likely to translate into leadership in many other areas of importance to the United States, from materials design to defense technologies.
Second, if foreign vendors start selling supercomputers that are cheaper and/or more effective than those of U.S. vendors, the U.S. advanced computing industry will suffer, making it harder for DOE labs and other U.S. entities to acquire the most powerful systems.
Third, if the fastest computers are overseas rather than in the United States, the best scientists are likely to direct their efforts to developing their applications for those computers, with the result that vital expertise will spread more rapidly to our competitors and that the best codes will run less well, or not at all, on U.S. supercomputers. (As a historical example, we note that the Japanese Earth Simulator, the fastest supercomputer in the world from 2002 to 2004, attracted many U.S. teams, who developed there rather than on U.S. systems and, furthermore, were required to provide their code to the Japanese in return.)
Fourth, a decline in the relative performance of U.S. systems will make retention and recruiting of top talent more difficult.
Clearly, letting our supercomputing capabilities wither and ceding the field to international players, whether friend or competitor, carries tremendous scientific, economic, and national security risk.
The present situation is not all bad, and China’s current lead in HPC may not be as drastic as it seems. As an article from Data Center Dynamics notes, around half of the $3.2 billion DOE has allocated for HPC research has gone toward software development, an area where the United States likely retains the greatest capabilities. This mirrors the situation in advanced semiconductor manufacturing, where United States companies currently dominate the market for Electronic Design Automation (EDA) software.
The Big Companies: Hyperscalers
An additional challenge to United States leadership in advanced computing comes from the big technology companies, specifically their attractiveness to the most talented researchers. While these companies are a source of major innovation, particularly in advanced computing software, their needs are sometimes in tension with those of the scientific community.
The ASCR report frames the issue this way:
The computing industry is now dominated by a small set of cloud companies. Facebook (now Meta), Amazon Web Services (AWS), Apple, Netflix, and Google (now Alphabet) (collectively referred to as FAANG), together with Microsoft and their Chinese counterparts Baidu, Alibaba, and Tencent (BAT), are called hyperscalers….
Naturally, the computing marketplace focuses on the hyperscalers' needs, a situation that does not bode well for the much smaller science and engineering communities that have historically driven HPC developments. For example, increasing emphasis is given to low-precision arithmetic operations suitable for AI computations, rather than to the higher precision generally needed for science and engineering.
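To make the precision point concrete, here is a small illustration (my own sketch using NumPy, not an example from the report): accumulating a tiny increment in half precision, the 16-bit format favored for AI workloads, versus double precision, the scientific default.

```python
import numpy as np

# Add 0.001 one hundred thousand times; the true answer is 100.
inc = np.float16(0.001)
total_fp16 = np.float16(0.0)
for _ in range(100_000):
    total_fp16 = total_fp16 + inc  # each step rounds to half precision

# Once the running sum reaches 4.0, the increment falls below half a
# unit in the last place of float16, so every further addition rounds
# away to nothing and the sum stalls.
print(total_fp16)        # ~4.0, wildly wrong
print(0.001 * 100_000)   # 100.0 in float64, the scientific default
```

For a machine-learning workload that tolerates noisy gradients, this kind of error is often acceptable; for a climate model or a reactor simulation, it is not.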
The large hyperscalers build their machines to be responsive, optimizing for elastic demand: a high volume of users at one moment, fewer at another. Scientific machines, in contrast, are designed to minimize idle time and maximize throughput, getting as much computational work done as possible.
These differences mean that, while the hyperscalers are applying tremendous resources and achieving incredible breakthroughs across advanced computing, their work is not an adequate substitute for the scientific work supported by ASCR:
These differences mean that, in general, it is not feasible for DOE to either outsource its HPC workload to the cloud or order a cloud data center instead of an HPC machine. This is not to imply that the HPC community cannot benefit from collaborating with the hyperscalers and adopting technology from them.
A 2020 report from the American Association for the Advancement of Science (AAAS), titled “The Perils of Complacency: America at a Tipping Point in Science & Engineering,” notes that two-thirds of research and development (R&D) in the United States is now funded by companies, a reversal of the historical trend. The report further notes that within industry R&D, increasing focus is placed on developing marketable products, with less effort going into fundamental research.
The salaries on offer, the excitement at the potential for innovation, and the sheer size of R&D budgets make the hyperscalers an incredibly attractive place for talented young professionals to work. DOE and the National Labs may struggle to attract and retain the best researchers when such lucrative opportunities exist elsewhere.
Uncertain Funding and Recommendations for ASCR
Because of the critical role high-performance computing plays in the United States’ economic and scientific development, ASCR is essential to the country’s security and to its economic and scientific endeavors. The program must be funded adequately, and efforts must be made to attract and retain top talent in the field.
With the winding down of the Exascale Computing Project, uncertainty about the future is “generating much anxiety” within ASCR. The vast sums of money available in industry, along with the lack of a clear vision for what comes after exascale machines, are major challenges for ASCR and, consequently, for American leadership in advanced computing more generally.
In addition to the budget cut proposed by Congress for FY24, the Advisory Board report notes that funding is declining in real terms due to diversions toward efforts specifically geared to machine learning and artificial intelligence, as well as to quantum computing and quantum information science, work that may well prove beneficial but is far from a proven technology at the present time.
My recommendations coincide with the four the ASCR Advisory Board lays out in its report. The first deals with four technology areas (modeling and simulation, AI, leading-edge computing architectures, and future architectures) in which ASCR should seek to maintain and build upon its leadership.
I quote the remaining three report recommendations in part or in full:
ASCR leadership should work with the DOE labs to develop a decadal-plus post-exascale vision and strategy that builds on ASCR’s strengths in mathematics and computing research working together with DOE’s world-class facilities. The focus should be on providing sustained investments to preserve and extend ASCR’s current leadership in CS&E research and multidisciplinary team science while also establishing new application areas in emerging topics such as digital twins and AI for science, energy, and security, together with addressing daunting computing challenges as Moore’s law fades.
Next,
ASCR needs to articulate a vision, associated goals, and milestones for international collaboration focused on post-exascale computing and networking. ASCR should work with the labs to identify critical research and facilities opportunities that may require international partnership to create and sustain international leadership, either because of the scale of investments needed or because of the unique capabilities that international partnerships can provide. ASCR should work to establish trust relationships with strategic partners, evangelize and socialize these efforts, define agreement structures (perhaps beyond the traditional memorandum of understanding (MOU)), and provide resources to develop flexible multiparty collaborations.
Finally,
ASCR needs to invest in long-term forward-looking co-design research in advanced computer architecture and system concepts to identify potential solutions for sustaining continued scientific productivity increases for future scientific computing systems. Such a co-design effort will require substantially increased government investment in basic research and development. In addition, DOE should fund the building of real hardware and software prototypes at scale to test new ideas using custom silicon and associated software.
Computing is too important an area to allow the United States to fall behind. Congress should fully meet ASCR’s funding request for FY24, and require a report from ASCR outlining strategies to meet each of the four recommendations provided above.
Budget Considerations
Because Congressional spending is limited by negotiated budget caps for FY24, I will humbly suggest two areas where funding could be reduced slightly to meet the requested increase for ASCR: high energy physics, which I mentioned earlier, and nuclear weapons activities, including the revitalized plutonium pit production effort underway since 2006.
High energy physics, with the increasingly large accelerators required to obtain new results, is incredibly expensive, and new facilities bring diminishing returns over time: the scientific advances a new machine makes possible arrive early, then become fewer and farther between.
And while I love fundamental science, the theories of particle physics have become increasingly abstract and difficult to prove or disprove, and the practical benefits to society from costly accelerator experiments are incidental when they occur at all. Advanced computing, by contrast, underlies every element of modern science. This seems to me an area ripe for a tradeoff.
The next area where I suggest a budget tradeoff is funding for the National Nuclear Security Administration (NNSA), also housed within the Department of Energy.
The proposed budget for NNSA is over double that of the DOE Office of Science, and both the House and Senate have proposed a generous increase in the “Weapons Activities” category:

Plutonium pits (often just called “pits”) are essentially the triggers for modern nuclear warheads. It is uncertain how long these pits last. Some estimates say they may work less well after as little as 80 years, but the best current studies indicate no discernible degradation in the plutonium until a simulated 250 years. Most pits in use in the United States arsenal are currently 40 years old. Congress’ goal is for the United States to be able to produce new plutonium pits at a rate of 80 per year. However, as a deep dive from the Bulletin of the Atomic Scientists notes:
Congress’ current concern therefore is driven in good part by the fact that NNSA has not demonstrated that it can reliably produce pits on any scale. If NNSA could demonstrate reliable production of even 10 to 20 pits annually, that reliable production line could be the basis for constructing additional production lines, if they become necessary.
These pit production efforts, especially the push toward the 80-pit-per-year threshold, have the potential to accelerate a nuclear arms race and are incredibly costly:
The NNSA’s cost estimate for using the Savannah River facility to manufacture warhead pits has already risen from $3.6 billion in 2017 for an 80 pit-per-year production capacity to $11.1 billion for a 50 pit-per-year capacity in 2023.
Rather than costly efforts aimed at ramping up production of a nuclear weapon element that will likely not need to be replaced for at least 100 years, Congress could direct a fraction of that money to empowering ASCR to maintain American leadership in advanced scientific computing.
Conclusion
Advanced computing, while not frequently recognized as such, is the workhorse of modern scientific and technological development. It would be a grave mistake for the United States to leave this foundational technology, and the Advanced Scientific Computing Research (ASCR) program that champions it, underfunded at this critical moment of global technological competition.
One final note: the pivot of some AI research toward open source (e.g., LLaMA) suggests that the barriers to entry for compute may come to resemble those of distributed hardware projects (e.g., folding@home). Retaining the most talented researchers does not always require advanced hardware on site; it requires access to that hardware (e.g., through thin clients).