18/97
The Maintainers track's program is diverse and practical, with open discussion rounds about current challenges and talks from maintainers sharing their successes and experiences.
19/97
A key focus is on sustainable software development and creating an inclusive community. How do we ensure these vital scientific tools continue to be developed and available for everyone in the future?
20/97
The track brings together Core Maintainers, regular contributors, Research Software Engineers (RSEs), and anyone who wants to learn more about the world of software maintenance.
21/97
Why is this so important? Scientific research today depends on this open-source software. The Maintainers Track recognizes this huge effort and provides the support needed to continue this vital work. #OpenSource
22/97
The Maintainers Track is mostly curated with invited maintainers, but we're open to submissions too! These are reviewed directly by the program committee (no peer review). Want to connect with fellow maintainers? Email us at [email protected].
23/97
Speaking of contacting the program team, did you know you can influence our #keynote lineup? We're always looking for inspiring speakers! 🗣️
24/97
You can nominate someone for a keynote at any time, even before the CfP opens. Just email [email protected] with their name and why they'd be a great fit.
25/97
Contact details and any extra info are a huge help. You can already nominate keynoters for #EuroSciPy2026!

26/97
So, to recap, what does the final #EuroSciPy2025 program look like? As usual:

🧑‍🏫 Tutorials (Beginner & Advanced)
🎤 Main Conference Talks & Keynotes
🛠️ Maintainer Track
🖼️ Poster Session
🏃 Sprints (Free for all!)

27/97
This structure was a learning moment in our peer review! Some reviewers, new to EuroSciPy, were unfamiliar with our dedicated beginner track. As a result, some great beginner tutorials were flagged as "unfit for the audience."
28/97
This highlighted a gap in our onboarding for reviewers, and we had to send out updated instructions mid-review to clarify the conference structure. It’s a process, and we're learning and improving as we go!
29/97
So here's a pro-tip for future submitters: a high-quality, (absolute) beginner tutorial has a great chance of being selected! We are always looking for great content for this track. #CfP #ProTip #EuroSciPy2026
30/97
This brings us back to the nitty-gritty of the review process. Now that you know the program structure, let's look at how we evaluated proposals…
31/97
Anonymization isn't perfect. A common challenge is when authors link to their GitHub profiles in the proposal, which can reveal their identity. We're committed to fairness, but this is a tricky problem in #OpenScience
32/97
For next year, we're considering recommending tools like https://anonymous.4open.science/. This service creates anonymized links to Git repositories, which would be a great help.

33/97
A tougher challenge is in-text deanonymization. Some authors include their names directly in the abstract or bio, or link to videos of their previous talks.
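A rough first pass at this could even be automated. The sketch below is purely hypothetical (not a tool EuroSciPy uses): it scans a proposal's text for obvious deanonymizing links, such as GitHub profiles or recordings of previous talks, so a human only has to review the flagged ones. The patterns and the `flag_identifying_content` helper are illustrative assumptions.

```python
import re

# Hypothetical pre-check for obvious deanonymizing content in a proposal.
# Illustrative only -- real scrubbing of names in free text still needs humans.
PATTERNS = [
    r"github\.com/[\w.-]+",       # GitHub profile or repo links
    r"gitlab\.com/[\w.-]+",       # same for GitLab
    r"youtube\.com/[\w/?=.-]+",   # links to recorded talks
    r"youtu\.be/[\w-]+",
    r"linkedin\.com/in/[\w-]+",   # personal profiles
]

def flag_identifying_content(text: str) -> list[str]:
    """Return substrings that look like identifying links."""
    hits: list[str] = []
    for pattern in PATTERNS:
        hits.extend(re.findall(pattern, text, flags=re.IGNORECASE))
    return hits

abstract = "See my previous talk at youtu.be/abc123 and the code at github.com/somedev"
print(flag_identifying_content(abstract))  # ['github.com/somedev', 'youtu.be/abc123']
```

This only catches links, of course; an author's name written plainly in the bio still slips through, which is exactly why manual scrubbing remains on the table.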
34/97
How will we handle this? One idea is to use a new @pretalx feature that allows volunteers to create a "scrubbed," anonymized version of a proposal for reviewers to see.
35/97
But this would be a monumental task for our volunteers. Manually redacting every submission is a huge amount of work, and we're not sure if it's feasible.
36/97
This is an open problem we're actively thinking about for #EuroSciPy2026. How can we best ensure a fair, blind review? We're open to #ideas from the #Community!
37/97
Want to help? Email us at [email protected] if you'd like to join our team and help with organization or volunteering next year. OR EVEN THIS YEAR!
38/97
To appreciate our new peer-review process, you need to know the old way. In the past, our program team of 5–8 people reviewed every single submission individually. 🥵
39/97
Then, we'd meet and review them all again as a group. An incredible amount of work! We knew we had to find a better, more sustainable way.
40/97
Our goal with peer review was to lighten this load. We wanted each reviewer to handle at most 20–30 proposals. But our initial calculations suggested we'd need an unrealistic number of volunteers. It seemed like the program team was destined to do it all again.
41/97
But then, the #community stepped up! We added a question to the #CfP asking if people wanted to be a reviewer, and we were delighted to see 41 people say yes! 🙌
42/97
We had so many volunteers that we even needed to run a quick pre-selection. Not everyone who signed up was able to participate in the end, so we had to re-assign some reviews, but the load per person was still a massive improvement over previous years.
43/97
We learned a lot from this first run and will refine our process next year to better ensure reviewers have prior experience with the #EuroSciPy conference format.
44/97
Our review process had 3 stages:
Stage 1: Anonymized peer review.
Stage 2: Program team pre-selects clear winners and converts some talks/tutorials to posters. Reviewers can now see other reviews.
Stage 3: Final decisions and tie-breaking. Reviewer activity is minimal here.
45/97
This brings us to the "Poster as a Fallback" option. Converting talks/tutorials to posters isn't new, but we've worked to make the process more transparent for everyone.
46/97
A couple of years back, we added an explicit question to the CfP asking if authors were willing to present a poster. This year, we refined the phrasing to be crystal clear, avoiding any ambiguity.
47/97
This "fallback" question was hidden from reviewers during Stage 1 but was crucial for the program committee in Stages 2 & 3 to build a fantastic and diverse poster session.
48/97
Why do we do this? Converting high-quality talk/tutorial proposals to posters ensures that great work still has a platform at the conference, even if talk slots are limited.
49/97
Posters provide a unique opportunity for direct, one-on-one interaction with attendees. These in-depth discussions can sometimes be even more engaging and lead to more fruitful collaborations than traditional talks.
50/97
And posters can be highly interactive! Many presenters enhance their posters with QR codes linking to live demos, GitHub repos, or supplementary materials. It's a very dynamic format.
51/97
A huge benefit for poster presenters is the Poster Spotlight Session. This is where you get to pitch your poster to the entire conference audience—something only keynotes and lightning talks get to do, as regular talks run in parallel tracks! ✨
52/97
Now, let's dive into what our peer reviewers were looking for. We used a weighted scoring system to guide the process, but it was based on qualitative feedback, not numbers.
53/97
Reviewers didn't assign scores directly. They chose from radio-button options with English sentences like "I do not recommend acceptance" or "I believe it's well-written and easy to understand." This was then converted to a weighted score on the backend.
54/97
✅ Recommendation (Weight: 2.0): This was the most important factor. Overall, is the proposal a good fit for EuroSciPy? Is it interesting, relevant, and valuable to our community?
55/97
✅ Clarity (Weight: 1.5): Is the proposal well-written and easy to understand? A clear proposal is often a sign of a clear presentation to come.
56/97
✅ Audience Fit (Weight: 0.5): Does the proposal match the expected expertise of the EuroSciPy audience for the chosen track? (More about this later 👇🏼)
57/97
✅ Originality (Weight: 1.0): Is the submitter an original author or active contributor to the project they're presenting? We prioritized original work and maintainer submissions this year. (Also, more about this later 👇🏼)
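Putting the four criteria together, the backend conversion might look something like the sketch below. The weights (2.0, 1.5, 0.5, 1.0) are the ones from this thread, but the option sentences, their numeric values, and the `weighted_score` helper are illustrative assumptions, not the actual pretalx configuration.

```python
# Hypothetical sketch of the backend scoring: reviewers pick a sentence,
# which maps to a number, and per-criterion weights combine them.
# Option wording and numeric values are assumptions; weights are from the thread.
OPTION_SCORES = {
    "I do not recommend acceptance": 0,
    "I am neutral about this proposal": 1,
    "I recommend acceptance": 2,
}

WEIGHTS = {
    "recommendation": 2.0,
    "clarity": 1.5,
    "audience_fit": 0.5,
    "originality": 1.0,
}

def weighted_score(answers: dict[str, int]) -> float:
    """Combine per-criterion numeric scores into one weighted total."""
    return sum(WEIGHTS[criterion] * score for criterion, score in answers.items())

# Example: strong on everything except audience fit.
answers = {"recommendation": 2, "clarity": 2, "audience_fit": 1, "originality": 2}
print(weighted_score(answers))  # 2*2.0 + 2*1.5 + 1*0.5 + 2*1.0 = 9.5
```

The nice property of this setup is that reviewers never see (or argue about) raw numbers; they just pick the sentence that matches their judgment.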
58/97
To address the 'Audience Fit' confusion (see post 27!), we gave reviewers specific guidelines on the expected Python and domain knowledge for each submission type.
59/97
🧑‍🏫 For Tutorials, we have two distinct tracks: Beginner and Advanced. Both assume little-to-no specific domain knowledge, as the focus is on learning a tool or technique. The main difference is the expected #Python expertise.
60/97
🎤 For Talks, we aim for a balance. Presenters can assume the audience has some-to-expert #Python knowledge, but should assume little-to-no specific domain knowledge. The goal is to make your work accessible to the whole #SciPy #community!
61/97
🖼️ Posters are perfect for deep, specialized scientific topics. Here, presenters can assume up to expert-level domain knowledge, making it the ideal format for presenting complex research.
62/97
It's also worth noting that proposals submitted directly to the Poster track have priority over talks/tutorials that are later converted due to limited schedule space. So if you know your work is a great fit for a poster, submit it as one from the start!
63/97
So, a pro-tip for #EuroSciPy2026: if your work is highly specialized, and you can't give a 10-min overview for a general audience, a poster is your best bet! 😉 We love seeing deep dives during the poster session, and they usually have a high acceptance rate!
64/97
A special note on our 'Education, Diversity & Outreach' track: while talks here don't require deep Python or domain knowledge, they must be relevant to the #ScientificPython community.
65/97
A key instruction for reviewers: highlight at least one strength and one area for improvement for every submission. This ensures #feedback is always constructive and helpful for the authors.
66/97
To help authors grow, we also let reviewers know that a summary of their constructive feedback might be shared with the submitter after the process was complete.
67/97
To facilitate detailed feedback, reviewers had two comment fields. The first, "Note for reviewers," was a real-time, collaborative tool. Notes were immediately visible to other reviewers to help provide objective context.
68/97
The "Note for reviewers" field was for sharing helpful, objective context. For example, if a proposal mentioned a niche technique like "Raman spectroscopy," a reviewer could share a Wikipedia link explaining what it is.
69/97
This helped other reviewers, who might be Python experts but not spectroscopy experts, quickly grasp the domain. The goal was to level the playing field of knowledge for all reviewers, making the process fairer (the author of this post hopes that's a word).
70/97
Crucially, these notes had to be strictly factual and non-judgmental. The aim was to provide context without influencing the evaluation or accidentally revealing the submitter's identity. The actual assessment went in the "What do you think?" field.
71/97
The "What do you think?" field was for the actual constructive feedback. This is where reviewers would share their assessment, including the required strength and area for improvement.
72/97
Crucially, this constructive feedback in the "What do you think?" field was kept private during Stage 1 of the review. Reviewers could not see each other's assessments to ensure their initial evaluations were completely independent.
73/97
This changed in Stages 2 & 3. For submissions they had already reviewed, reviewers could then see the feedback from others. This was mostly to satisfy reviewers' curiosity ("do others agree with me?"), but also to encourage a few additional reviews to break ties.
74/97
To handle practical issues, reviewers could use tags to flag proposals. Tags included 'Broken Links', 'Incorrect Category', and 'Tutorial Missing Materials', among others.
75/97
A key tag was 'Not Anonymized'. If a reviewer spotted identifying info, they could flag it. The Program Team would then try to contact submitters to fix these issues where possible. But the problem turned out to be so widespread that there was little we could do about it :(
76/97
And of course, there was a standard but crucial rule: reviewers were instructed to skip any review where they had a personal, professional, or financial conflict of interest.
77/97
But what if a reviewer isn't an expert on a topic? We asked them to proceed anyway! #EuroSciPy is a general conference, and reviewers represent our diverse audience. If a talk isn't clear to a non-expert reviewer, it likely won't be clear to many attendees.
78/97
This year, we prioritized originality, but what does that mean? For us, it had two sides: original contributors and original content.