The #isc25 is over and I have half-recovered from the weekend, too. Time to continue my thread summing up the #SnakemakeHackathon2025!

To me, an important contribution was from Michael Jahn from the Charpentier Lab: A complete re-design of the workflow catalogue. Have a look: https://snakemake.github.io/snakemake-workflow-catalog/ - findability of ready-to-use workflows has greatly improved! Also, the description on how to contribute is now easy to find.

A detailed description has been published in the #researchequals collection https://www.researchequals.com/collections/hm1w-cg under https://doi.org/10.5281/zenodo.15574642

#Snakemake #ReproducibleComputing #ReproducibleResearch #OpenScience


This morning, I am travelling to the #isc25 and hit a minor bug on #researchequals. Hence, no updates in the collection.

But there are still a few to describe, even without adding the latest contributions:

For instance, this one (https://zenodo.org/records/15490064) by Filipe G. Vieira: a helper function to extract checksums from files, so they can be compared with the checksums Snakemake is already able to calculate. Really handy!

#Snakemake #ReproducibleComputing #OpenScience

Extra Input Helper Functions for Snakemake

This Snakemake Hackathon contribution adds several helper functions to infer file sizes, calculate hash sums, or return a file's content through a callable.
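A minimal sketch of what such a checksum helper could look like in plain Python with `hashlib`; the function name `file_checksum` is illustrative and not the contribution's actual API:

```python
import hashlib
import tempfile

def file_checksum(path, algorithm="sha256", chunk_size=65536):
    """Compute a hash sum of a file in chunks, so large files
    never have to be read into memory at once."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative usage: write a small file and hash it.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as tmp:
    tmp.write("hello snakemake\n")
    tmp_path = tmp.name

checksum = file_checksum(tmp_path)
print(checksum)
```

Hashing in fixed-size chunks keeps memory use constant, which matters for the large data files typical in bioinformatics workflows.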


Before I continue uploading - and I do have a couple more contributions to add to the #ResearchEquals collection - first another contribution by Johanna Elena Schmitz and Jens Zentgraf, made at the #SnakemakeHackathon2025

One difficulty when dealing with a different scientific question: do I need to re-invent the wheel (read: write a workflow from scratch) just to address my slightly different question?

Snakemake already allowed incorporating "alien" workflows, even #Nextflow workflows, into one's own workflow. The new contribution allows for a more dynamic inclusion - with very few changes.

Check it out: https://zenodo.org/records/15489694

#Snakemake #ReproducibleComputing #OpenScience

Allowing Dynamic Load of Modules for Snakemake

Snakemake modules had to be explicitly defined and loaded at the beginning of a workflow. This limited the flexibility of workflows, particularly when dealing with complex dependency structures or when modules needed to be loaded conditionally based on runtime parameters. This contribution eases the procedure of dynamically adding third-party modules.
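For context, this is roughly what the pre-existing, static style of module inclusion looks like in a Snakefile (Snakemake >= 6); the module name, repository, and tag below are placeholders, not part of the contribution:

```snakemake
# Statically declare an external workflow as a module at parse time.
module dna_seq:
    snakefile:
        github("snakemake-workflows/dna-seq-gatk-variant-calling",
               path="workflow/Snakefile", tag="v2.0.1")
    config:
        config

# Import all of its rules under a prefix to avoid name clashes.
use rule * from dna_seq as dna_seq_*
```

The declaration is fixed when the Snakefile is parsed; the hackathon contribution relaxes exactly this constraint, so that which module is loaded can depend on runtime parameters.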

One of the best workshops I attended last year was one on modular thinking in research, and modular publishing with ResearchEquals.com, by @JanuschkaSchmidt
Find the slides here if you have FOMO: https://www.researchequals.com/modules/kjp9-g898 The sketchnote shows my considerations for which parts of research could be a module... how about showing the derivation-chain research journey on #ResearchEquals @annescheel? #openscience #modularpublishing #OpenScienceCommunity
Publishing research output continuously : The case of modular publishing (PROCess)

In this interactive workshop, I explore modular publishing as a new and alternative publishing format. Moreover, I discuss with participants a modular approach to research, examine some benefits it can offer, and learn how to put it into practice. These are the slides for this interactive workshop, designed to progress participants' knowledge and skills in academic research and publishing.

@bonfire @brembs @UlrikeHahn @jorge @open_science

@chartgerink
maybe you can discuss how #ResearchEquals can be on the same network too

Do I know anyone who is using ResearchEquals? I'd love to know what you think #ResearchEquals https://www.researchequals.com/
ResearchEquals.com

Step by step publishing of your research, with a new publishing format: Research modules.

I published my first module on R=! https://doi.org/10.53962/v3gr-nzdb

The idea came up when yet another interesting podcast published an episode in which the interviewee's audio was pretty bad compared to the hosts'.
#podcasting #ResearchEquals

Compare podcast qualities automatically to create reference quality

Automatically download podcast feeds and episode files and process them to collect data on various aspects related to publication and production, such as audio quality. These data should allow for comparing loudness, ratios of talk to music, noise levels and more across podcasts and within podcasts (or seasons). The aggregate can then be used as reference for new recordings.
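A minimal, stdlib-only sketch of the very first step - extracting episode audio URLs from a podcast RSS feed - assuming the standard `<enclosure>` convention; the feed below is a made-up example, and the actual loudness analysis would need an audio library and is omitted:

```python
import xml.etree.ElementTree as ET

def episode_enclosures(rss_xml):
    """Return (title, audio URL) pairs for each episode in an RSS feed."""
    root = ET.fromstring(rss_xml)
    episodes = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            episodes.append((title, enclosure.get("url")))
    return episodes

# Tiny illustrative feed (not a real podcast).
feed = """<rss version="2.0"><channel><title>Demo</title>
<item><title>Episode 1</title>
<enclosure url="https://example.org/ep1.mp3" type="audio/mpeg" length="123"/></item>
</channel></rss>"""
print(episode_enclosures(feed))
```

From the enclosure URLs, the episode files could then be downloaded and fed into an audio-analysis step to collect the loudness and talk-to-music metrics described above.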

Scientists and researchers in my timeline, do you use micropublishing platforms like octopus.ac or researchequals.com to publish your research? If yes, I want to talk to you for a story for Nature.

#scientists #researchers #scientificPublishing #mpu #micropublishing #ResearchEquals
#journoRequest