index.html (23 additions, 1 deletion)
@@ -53,6 +53,11 @@ <h2>Overview</h2>
 <td><a href="#fy258">LCI: a Lightweight Communication Interface for Asynchronous Multithreaded Communication</a></td>
 <td>05/27/2025</td>
 </tr>
+<tr>
+<td>Aparna Chandramowlishwaran</td>
+<td><a href="#fy261">Scaling AI for Scientific Discovery: Faster Kernels, Efficient Models, Better Physics</a></td>
+<td>04/15/2025</td>
+</tr>
 <tr>
 <td>Richard Berger</td>
 <td><a href="#fy260">Driving Continuous Integration and Developer Workflows with Spack</a></td>
@@ -135,6 +140,24 @@ <h3>LCI: a Lightweight Communication Interface for Asynchronous Multithreaded Co
 Jiakun Yan is a fifth-year Ph.D. student at UIUC, advised by Prof. Marc Snir. His research involves exploring better communication library designs for highly dynamic/irregular programming systems and applications. He is the main contributor to the Lightweight Communication Interface (LCI) Project and the HPX LCI parcelport.
 </p>
 
+
+<div id="fy261"></div>
+<h3>Scaling AI for Scientific Discovery: Faster Kernels, Efficient Models, Better Physics</h3>
+Speaker: Aparna Chandramowlishwaran<br>
+University of California, Irvine<br><br>
+
+Abstract:
+<p>
+As AI becomes a powerful tool in scientific computing, two challenges emerge: (1) efficiently scaling models on modern hardware, and (2) ensuring these models can learn complex physics with fidelity and generalizability. In this talk, I will discuss our work at this intersection.
+First, I will introduce Fused3S, a GPU-optimized algorithm for sparse attention—the backbone of graph neural networks and transformers. By fusing matrix operations, Fused3S reduces data movement and maximizes tensor core utilization, achieving state-of-the-art performance across diverse workloads. Then, I will present BubbleML and Bubbleformer, our efforts to model boiling dynamics with ML. Boiling is fundamental to energy, aerospace, and nuclear applications, yet remains difficult to model due to the interplay of turbulence, phase change, and nucleation. By combining large-scale simulation datasets with transformer architectures, Bubbleformer forecasts boiling dynamics across fluids, geometries, and operating conditions. Together, these efforts illustrate how scaling AI—both computationally and scientifically—can accelerate discovery across disciplines. I will conclude with open challenges and opportunities in AI-driven scientific computing, from hardware-aware models to foundation models for physics.
+</p>
+
+Bio:
+<p>
+Aparna Chandramowlishwaran is an Associate Professor at the University of California, Irvine, in the Department of Electrical Engineering and Computer Science. She received her Ph.D. in Computational Science and Engineering from Georgia Tech in 2013 and was a research scientist at MIT prior to joining UCI as an Assistant Professor in 2015. Her research lab, HPC Forge, aims to advance computational science using high-performance computing and machine learning. She currently serves as an associate editor of the ACM Transactions on AI for Science.
+</p>
+
+
 <div id="fy260"></div>
 <h3>Driving Continuous Integration and Developer Workflows with Spack</h3>
 Speaker: Richard Berger<br>
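For readers unfamiliar with the pattern the abstract alludes to, the sketch below shows why an unfused sparse-attention pipeline pays for data movement, under the assumption that Fused3S targets the usual three sparse stages (SDDMM, row-wise softmax, SpMM): each stage materializes the sparse score matrix in memory before the next stage reads it back. This is a minimal illustration, not the Fused3S kernel; all function and variable names are assumptions.

```python
# Minimal sketch (assumed pipeline, not the Fused3S kernel): sparse attention
# as three separate sparse stages -- SDDMM, masked row-softmax, SpMM.
# A fused kernel combines these so score tiles stay in registers/shared
# memory instead of round-tripping through global memory between stages.
import numpy as np
import scipy.sparse as sp

def sparse_attention_unfused(Q, K, V, mask):
    """Unfused sparse attention; `mask` is a CSR sparsity pattern."""
    d = Q.shape[1]
    rows, cols = mask.nonzero()

    # Stage 1 -- SDDMM: compute scores only at the mask's nonzero positions.
    scores = np.einsum("ij,ij->i", Q[rows], K[cols]) / np.sqrt(d)
    S = sp.csr_matrix((scores, (rows, cols)), shape=mask.shape)

    # Stage 2 -- softmax over each row's stored entries.
    for i in range(S.shape[0]):
        lo, hi = S.indptr[i], S.indptr[i + 1]
        if lo == hi:
            continue
        row = np.exp(S.data[lo:hi] - S.data[lo:hi].max())
        S.data[lo:hi] = row / row.sum()

    # Stage 3 -- SpMM: sparse attention weights times dense values.
    return S @ V

# Usage on small random data.
rng = np.random.default_rng(0)
n, d = 8, 4
Q, K, V = rng.standard_normal((3, n, d))
mask = sp.random(n, n, density=0.3, format="csr", random_state=0)
print(sparse_attention_unfused(Q, K, V, mask).shape)  # (8, 4)
```

A fused GPU version has no meaningful NumPy analogue: conceptually, it computes a score tile, normalizes it, and multiplies it by the corresponding tile of V before the scores ever leave on-chip memory, which is presumably where the data-movement savings and tensor core utilization described in the abstract come from.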
@@ -150,7 +173,6 @@ <h3>Driving Continuous Integration and Developer Workflows with Spack</h3>
 Richard is a research software engineer in the Applied Computer Science Group (CCS-7) at Los Alamos National Laboratory (LANL) with a background in mechatronics, high-performance computing, and software engineering. He is currently contributing to the core development of LAMMPS and FleCSI, and working on DevOps for multiple other LANL projects.
 </p>
 
-
 <div id="fy259"></div>
 <h3>Introducing the Lamellar Runtime: A Modern Approach to High-Performance Computing</h3>