Abstract:
Chiplets have become a compelling approach to scaling and heterogeneous integration, e.g., integrating workload-specific processors and high-bandwidth memory systems into computing systems; integrating die from multiple function-optimized process nodes into one product; and integrating silicon from multiple businesses into one product.
Chiplet-based products have been produced in high volume by multiple companies using proprietary chiplet ecosystems. Recently, the community has proposed several new standards (e.g., UCIe) to facilitate the integration and interoperability of any compliant chiplet. Hyperscalers (e.g., Google, Amazon) are actively designing high-volume products with chiplets through these open interfaces, and other communities are exploring the end-to-end workflow and tooling to assemble chiplet-based products. High performance computing (HPC) can benefit from this trend. However, the performance, power, and thermal requirements unique to HPC present many challenges to realizing a vision of affordable, modular HPC using this new approach.
Bio:
John Shalf is the Department Head for Computer Science at Lawrence Berkeley National Laboratory. He formerly served as the Deputy Director for Hardware Technology on the US Department of Energy's (DOE) Exascale Computing Project (ECP) and, prior to that, was CTO for the National Energy Research Scientific Computing Center (NERSC) at LBNL. He has co-authored over 100 peer-reviewed publications in
parallel computing software and HPC technology, including the widely cited report “The Landscape of Parallel Computing Research: A View from Berkeley” (with David Patterson and others). He is also the 2024-2027 distinguished lecturer for the IEEE Electronics Packaging Society. Before joining Berkeley Laboratory, John worked at the National Center for Supercomputing Applications and the Max Planck Institute for
Gravitational Physics/Albert Einstein Institute (AEI), where he co-created the Cactus Computational Toolkit.