GigaScale #FluidX3D #CFD Simulations from Communication and IO Perspective
https://www.youtube.com/watch?v=uAIkpcX5EFc&list=PLA-vfTt7YHI2HEFrpzPhhQ8PhiztKhHU8
Scott Atchley, who co-keynoted #ISC25, posted a really meaningful response to my ISC25 recap blog post on LinkedIn (https://www.linkedin.com/posts/scottatchley_isc25-olcf-frontier-activity-7345786995765395457-lGoq). He specifically offered additional perspective on the 20 MW exascale milestone and on the pitfalls of the Ozaki scheme. It's short but adds very valuable context.
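For readers who haven't run into the reference: the Ozaki scheme emulates FP64 matrix multiplication by splitting each matrix into a sum of slices whose significands carry only a few bits, so that the partial products can be formed exactly on low-precision matrix engines and then accumulated. The following is only a minimal NumPy sketch of that splitting idea; the slice count and bit width are illustrative values, not tuned or production settings.

```python
import numpy as np

def split_fp64(A, slice_bits=11, num_slices=4):
    """Express A as a sum of slices whose significands use at most
    slice_bits bits each (illustrative parameters, not tuned values)."""
    slices = []
    R = np.array(A, dtype=np.float64, copy=True)
    for _ in range(num_slices):
        with np.errstate(divide="ignore"):
            e = np.where(R != 0.0, np.floor(np.log2(np.abs(R))), 0.0)
        scale = np.power(2.0, e - (slice_bits - 1))
        S = np.round(R / scale) * scale  # keep only the leading slice_bits bits
        slices.append(S)
        R = R - S                        # residual holds the remaining bits
    return slices, R

def emulated_gemm(A, B, **split_kwargs):
    """Accumulate pairwise slice products; on real hardware each partial GEMM
    would run on a low-precision matrix engine (FP16/BF16/INT8)."""
    A_slices, _ = split_fp64(A, **split_kwargs)
    B_slices, _ = split_fp64(B, **split_kwargs)
    C = np.zeros((A.shape[0], B.shape[1]))
    for Sa in A_slices:
        for Sb in B_slices:
            C += Sa @ Sb                 # slice_count**2 partial products
    return C

# Quick sanity check against a native FP64 matmul
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))
print(np.max(np.abs(emulated_gemm(A, B) - A @ B)))
```

One commonly cited drawback is visible in the inner loop: faithful FP64 emulation needs enough slices to cover the data's dynamic range, and the number of partial products grows with the square of the slice count, which can erode the speed and energy advantage of the low-precision units.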
I always enjoy reading Glenn K. Lockwood's conference recaps, with his #ISC25 recap being the latest. First, I was honored that AMD's Mark Papermaster invited me to share some science highlights from OLCF's Frontier. I shared that Frontier has grown slightly in the last year with the integration of the test and development system into the full system. My science highlights included:
• GE Aerospace's efforts to reduce the noise generated by their RISE engine, which will allow GE Aero to bring it to market sooner,
• NASA's work to understand how to use retro-propulsion to land humans and their gear on Mars,
• Researchers refining the phase diagram of carbon by identifying the narrow region in pressure and temperature that would allow body-centered cubic (BC8) carbon to exist. This material is expected to be 30% harder than diamond, and
• Efforts to understand how drug candidates interact with proteins. Unlike AI efforts such as AlphaFold that approximate protein docking, this effort uses molecular dynamics to get the two molecules close together, then switches to quantum mechanics for an exact docking. This application actually used over 1 exaflop (1 EF) of full-precision (FP64) performance on Frontier.
I also highlighted what Frontier's replacement, Discovery, will need to support modeling/simulation as well as artificial intelligence. It will need bandwidth everywhere: within the processors, scale-up bandwidth between processors, and scale-out bandwidth across the whole system, in addition to lots of high-precision and low-precision FLOPS. I will reply with more comments. 🧵 #OLCF #Frontier
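Purely as an illustration of the two-stage docking idea described in that last highlight, the control flow is roughly: run cheap classical molecular dynamics until ligand and protein are close, then hand the nearly docked pose to an expensive quantum-mechanical refinement. The function names below are hypothetical placeholders, not the OLCF application's actual API.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    ligand_coords: list          # placeholder for atom positions
    distance_to_site: float      # Angstrom

def coarse_md_dock(protein, ligand):
    """Hypothetical stand-in for the classical MD stage: cheap, approximate
    physics that drifts the ligand toward the binding pocket."""
    return Pose(ligand_coords=[], distance_to_site=3.0)

def qm_refine(protein, pose):
    """Hypothetical stand-in for the quantum-mechanical stage: expensive,
    accurate docking of a pose that is already close to the pocket."""
    return {"pose": pose, "binding_energy": None}

def dock(protein, ligand, handoff_cutoff=5.0):
    # Stage 1: molecular dynamics gets the two molecules close together.
    pose = coarse_md_dock(protein, ligand)
    # Stage 2: only then does the exact quantum-mechanical docking take over.
    if pose.distance_to_site > handoff_cutoff:
        raise RuntimeError("MD stage never reached the binding pocket")
    return qm_refine(protein, pose)
```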
For those who missed the #ISC25 Flux Framework Tutorial, we just posted our slides online:
https://github.com/flux-framework/Tutorials/blob/master/2025-ISC-AWS/Flux-ISC-2025.pdf
Thank you to those who attended, and see you next time! 👋
Our reflections from #ISC25 are live.
From standout sessions to big-picture trends and community moments (including our Roco casting a few spells), we’ve pulled together the key takeaways from Hamburg in our latest blog by Muneeb Khan, Senior HPC Managed Services Specialist.
📬 Read here: https://www.redoakconsulting.co.uk/blog/postcards-from-isc-2025/
#ISC #HPC #Supercomputing
#ISC25 is over, and I have half-recovered from the weekend, too. Time to continue my thread summing up the #SnakemakeHackathon2025!
To me, an important contribution came from Michael Jahn of the Charpentier Lab: a complete redesign of the workflow catalogue. Have a look: https://snakemake.github.io/snakemake-workflow-catalog/ - the findability of ready-to-use workflows has greatly improved! The description of how to contribute is now also easy to find.
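For anyone who has not used the catalogue yet: entries listed there can be pulled straight into your own project through Snakemake's module mechanism. This is only a minimal sketch; the repository name is a real workflow from the snakemake-workflows organisation chosen as an example, and the tag is illustrative, so pin whatever release you actually need.

```python
# Snakefile: reuse a catalogued workflow as a module
configfile: "config/config.yaml"

module dna_seq:
    snakefile:
        github(
            "snakemake-workflows/dna-seq-gatk-variant-calling",
            path="workflow/Snakefile",
            tag="v2.0.1",   # example tag, not a recommendation from the post
        )
    config:
        config

# Import all rules from the catalogued workflow into this project
use rule * from dna_seq
```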
A detailed description has been published in the #researchequals collection https://www.researchequals.com/collections/hm1w-cg under https://doi.org/10.5281/zenodo.15574642
#Snakemake #ReproducibleComputing #ReproducibleResearch #OpenScience
We are still very much in #ISC25 mode and came across this article on the development of compute power: even with accelerators, #MooresLaw can no longer be kept up with. Until a few years ago, compute power could still be doubled roughly every two years. Supercomputing architectures will also have to be planned in a more complex, more heterogeneous way to achieve sustainable energy use. https://www.nextplatform.com/2025/06/10/top500-supers-even-accelerators-cant-bend-performance-up-to-the-moores-law-line/