
Upcoming NSF Cyberinfrastructure Projects to Support ‘Long-Tail’ Users, AI and Big Data

August 7, 2019 — The National Science Foundation is well-positioned to support national priorities, as new NSF-funded HPC systems coming online in the upcoming year promise to democratize advanced computing and take advantage of new technologies, according to Jim Kurose, assistant director of Computer and Information Science and Engineering (CISE) at NSF. Kurose was speaking at the final keynote presentation of the PEARC19 conference on Aug. 1.

“If you look at these areas that are stated national priorities, you see that CISE and computing are generally at the center” of them, he said. “Computing plays … such a central role in all of these priority areas” such as AI, big data and cybersecurity. “These are the kinds of things that we do in this community.”

PEARC19, held in Chicago last week (July 28-Aug. 1), explored current practice and experience in advanced research computing, including modeling, simulation and data-intensive computing. The primary focus this year was on machine learning and artificial intelligence. The PEARC organization coordinates the PEARC conference series to provide a forum for discussing challenges, opportunities and solutions among the broad range of participants in the research computing community.

“NSF is a very bottom-up institution,” Kurose said, and the HPC community “has been really vocal about providing input … When I look at the tea leaves inside NSF, I see a focus on computation at large-scale facilities … I think that’s going to be incredibly important.”

Manish Parashar, director of NSF’s Office of Advanced Cyberinfrastructure, noted that the cyberinfrastructure (CI) discipline “cuts across all parts of NSF’s mission, but also its priorities … [we] can extrapolate beyond that and say that it’s even central to national priorities.”

“Increasingly, we are realizing that science only happens when all [the] pieces come together,” he added. “How do you combine the data, software, systems, networking and people?” The technology and scientific user community are changing rapidly, he noted, and NSF and the HPC community need to “continue thinking about what a cyberinfrastructure should look like and how … we evolve it” with innovations such as cloud computing and novel architectures balanced by computational stability.

Parashar introduced three new NSF-funded HPC systems, slated to come online in the coming year.

Bridges-2: High-Performance AI, HPC and Big Data

Nick Nystrom, chief scientist at the Pittsburgh Supercomputing Center (PSC), described their $10-million system, Bridges-2.

Bridges-2 converges high-performance AI with high-performance computing, emphasizing AI as a service, with ease of use, familiar software, interactivity and productivity as central goals, Nystrom said. A heterogeneous machine, Bridges-2 will feature Intel Ice Lake CPUs, advanced GPUs, compute nodes with varying amounts of memory (256 GB, 512 GB and 4 TB of RAM) and cloud interoperability to facilitate a variety of workflows. Built in collaboration with HPE, the system will include new technology such as an all-flash storage array for very rapid data access.
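To make the memory-tier idea concrete, a minimal Python sketch of matching a job to the smallest node that fits it might look like the following. The tier labels are hypothetical; only the 256 GB, 512 GB and 4 TB node sizes come from the article.

    # Hedged sketch: match a job to the smallest Bridges-2 node that
    # fits it. Only the 256 GB / 512 GB / 4 TB sizes come from the
    # article; the tier labels here are hypothetical.
    TIERS_GB = {"standard-256": 256, "standard-512": 512, "extreme-4096": 4096}

    def pick_tier(required_gb: float) -> str:
        """Return the smallest memory tier whose RAM fits the job."""
        for name, size_gb in sorted(TIERS_GB.items(), key=lambda kv: kv[1]):
            if required_gb <= size_gb:
                return name
        raise ValueError(f"no single node holds {required_gb} GB")

    print(pick_tier(300))  # -> standard-512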

PSC plans to accept an initial round of proposals via XSEDE’s allocations process from June to July 2020, with early users beginning work in August and production operations starting in October.

Expanse: A System Optimized for “Long-Tail” Users

Shawn Strande, deputy director of the San Diego Supercomputer Center (SDSC), described their new system, Expanse. The system, he said, is focused on “the long tail of science … It’s not a box in a machine room that people log into and do stuff. [It’s] connected with other things” in a way that addresses a broad range of computation and data analytics needs.

The $10-million acquisition will be optimized for small- to mid-scale jobs and machine learning. Dell is the primary HPC vendor and Aeon Computing will provide the storage. Expanse will feature 728 standard compute nodes, 52 GPU nodes, four large-memory nodes, 12 PB of performance storage, 7 PB of Ceph object storage, interactive containers and cloud-burst capability with a direct connection to Amazon Web Services. The system will be cloud-agnostic, supporting all of the major cloud providers.
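As a rough illustration of cloud bursting, the Python sketch below routes a job to local nodes when capacity allows and overflows to a cloud provider otherwise. The placement rule is an illustrative assumption, not SDSC’s actual scheduler policy; only the 728-node count comes from the article.

    # Hedged sketch of cloud bursting: run on Expanse's local nodes
    # while capacity allows, overflow to a cloud provider otherwise.
    STANDARD_NODES = 728  # standard compute nodes, per the article

    def place_job(nodes_needed: int, nodes_busy: int) -> str:
        """Decide whether a job runs locally or bursts to the cloud."""
        free = STANDARD_NODES - nodes_busy
        return "local" if nodes_needed <= free else "cloud burst (e.g., AWS)"

    print(place_job(16, 700))  # -> local
    print(place_job(64, 700))  # -> cloud burst (e.g., AWS)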

Expanse will begin its production phase under XSEDE allocation in the second quarter of 2020.

Ookami: A Testbed for ARM Architecture

John Towns, principal investigator of XSEDE, introduced Stony Brook University’s Ookami system on behalf of Robert Harrison, the new system’s PI and professor and director of the Department of Applied Mathematics & Statistics at Stony Brook. The $5-million Ookami will be a testbed project in collaboration with RIKEN CCS in Japan, featuring ARM architecture via the A64FX processor. Its 48 compute cores and four assistant cores will share 32 GB of RAM, which is sufficient to serve some 86 percent of XSEDE’s historic workload.
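The 86 percent figure implies an analysis of the memory footprints of past jobs. A minimal Python sketch of that kind of calculation, assuming a hypothetical job-log format, might look like this; the CSV layout and column name are assumptions for illustration, not XSEDE’s actual accounting schema.

    # Hedged sketch of the reasoning behind the 86 percent figure:
    # given a historical job log, count the share of jobs whose
    # per-node peak memory fits in Ookami's 32 GB.
    import csv

    NODE_RAM_GB = 32  # A64FX memory per node, per the article

    def fraction_fitting(log_path: str) -> float:
        """Fraction of logged jobs that fit within one node's RAM."""
        total = fits = 0
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                total += 1
                if float(row["peak_mem_gb_per_node"]) <= NODE_RAM_GB:
                    fits += 1
        return fits / total if total else 0.0

    # Usage, with a hypothetical log file:
    # print(f"{fraction_fitting('xsede_jobs.csv'):.0%} of jobs would fit")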

As a testbed project, Ookami will be phased into XSEDE service, with two years of allocation managed by Stony Brook beginning in late 2020, followed by two years of XSEDE allocation.

XSEDE allocation processes and requirements are described at xsede.org. The NSF awards can be found at:

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1928147

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1928224

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1927880


Source: Ken Chiacchia, Pittsburgh Supercomputing Center/XSEDE
