As AI becomes a powerful tool in scientific computing, two challenges emerge: (1) efficiently scaling models on modern hardware, and (2) ensuring these models can learn complex physics with fidelity and generalizability. In this talk, I will discuss our work at this intersection. First, I will introduce Fused3S, a GPU-optimized algorithm for sparse attention, the backbone of graph neural networks and transformers. By fusing matrix operations, Fused3S reduces data movement and maximizes tensor core utilization, achieving state-of-the-art performance across diverse workloads. Then, I will present BubbleML and Bubbleformer, our efforts to model boiling dynamics with ML. Boiling is fundamental to energy, aerospace, and nuclear applications, yet it remains difficult to model due to the interplay of turbulence, phase change, and nucleation. By combining large-scale simulation datasets with transformer architectures, Bubbleformer forecasts boiling dynamics across fluids, geometries, and operating conditions. Together, these efforts illustrate how scaling AI, both computationally and scientifically, can accelerate discovery across disciplines. I will conclude with open challenges and opportunities in AI-driven scientific computing, from hardware-aware models to foundation models for physics.

Bio: Aparna Chandramowlishwaran is an Associate Professor at the University of California, Irvine, in the Department of Electrical Engineering and Computer Science. She received her Ph.D. in Computational Science and Engineering from Georgia Tech in 2013 and was a research scientist at MIT before joining UCI as an Assistant Professor in 2015. Her research lab, HPC Forge, aims to advance computational science using high-performance computing and machine learning. She currently serves as an associate editor of ACM Transactions on AI for Science.

Teams: Join the meeting now

Host: Patrick Diehl (CCS-7)