35/97
But this would be a monumental task for our volunteers. Manually redacting every submission is a huge amount of work, and we're not sure if it's feasible.
36/97
This is an open problem we're actively thinking about for #EuroSciPy2026. How can we best ensure a fair, blind review? We're open to #ideas from the #Community!
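One possible direction, purely a sketch to illustrate both the idea and why it's hard: automatically masking author-identifying strings before review. Everything below (field names, the example text) is hypothetical, not something we've built.

```python
import re

def redact(proposal_text: str, author_names: list[str]) -> str:
    """Naively mask literal author names in a proposal.

    This only catches exact matches: GitHub handles, project URLs, and
    phrases like "as we presented last year" would still leak identity,
    which is exactly why manual redaction is so labor-intensive.
    """
    redacted = proposal_text
    for name in author_names:
        # Case-insensitive literal match; real submissions need far more care.
        redacted = re.sub(re.escape(name), "[REDACTED]", redacted, flags=re.IGNORECASE)
    return redacted

print(redact("Jane Doe will demo the tool Jane built.", ["Jane Doe", "Jane"]))
# -> "[REDACTED] will demo the tool [REDACTED] built."
```

Even this toy version shows the catch: the hard part isn't the string replacement, it's knowing what counts as identifying.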
37/97
Want to help? Email us at [email protected] if you'd like to join our team and help with organization or volunteering next year. OR EVEN THIS YEAR!
38/97
To appreciate our new peer-review process, you need to know the old way. In the past, our program team of 5–8 people reviewed every single submission individually. 🥵
39/97
Then, we'd meet and review them all again as a group. An incredible amount of work! We knew we had to find a better, more sustainable way.
40/97
Our goal with peer review was to lighten this load. We wanted each reviewer to handle at most 20–30 proposals. But our initial calculations suggested we'd need an unrealistic number of volunteers. It seemed like the program team was destined to do it all again.
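To see why the numbers looked scary at first, here's the back-of-the-envelope math, with made-up figures since we're not quoting exact submission counts here:

```python
# Back-of-the-envelope reviewer math. The submission count and the
# reviews-per-proposal target below are illustrative assumptions.
submissions = 300          # hypothetical number of proposals
reviews_per_proposal = 3   # hypothetical redundancy target
max_per_reviewer = 25      # mid-range of the 20-30 cap mentioned above

total_reviews = submissions * reviews_per_proposal        # 900 reviews
reviewers_needed = -(-total_reviews // max_per_reviewer)  # ceiling division -> 36
print(reviewers_needed)
```

Dozens of reviewers, in other words, which is far more than a 5-8 person program team.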
41/97
But then, the #community stepped up! We added a question to the #CfP asking if people wanted to be a reviewer, and we were delighted to see 41 people say yes! 🙌
42/97
We had so many volunteers that we even needed to run a quick pre-selection. Not everyone who signed up was able to participate in the end, so we had to re-assign some reviews, but the load per person was still a massive improvement over previous years.
43/97
We learned a lot from this first run and will refine our process next year to better ensure reviewers have prior experience with the #EuroSciPy conference format.
44/97
Our review process had 3 stages:
Stage 1: Anonymized peer review.
Stage 2: Program team pre-selects clear winners and converts some talks/tutorials to posters. Reviewers can now see other reviews.
Stage 3: Final decisions and tie-breaking. Reviewer activity is minimal here.
45/97
This brings us to the "Poster as a Fallback" option. Converting talks/tutorials to posters isn't new, but we've worked to make the process more transparent for everyone.
46/97
A couple of years back, we added an explicit question to the CfP asking if authors were willing to present a poster. This year, we refined the phrasing to be crystal clear, avoiding any ambiguity.
47/97
This "fallback" question was hidden from reviewers during Stage 1 but was crucial for the program committee in Stages 2 & 3 to build a fantastic and diverse poster session.
48/97
Why do we do this? Converting high-quality talk/tutorial proposals to posters ensures that great work still has a platform at the conference, even if talk slots are limited.
49/97
Posters provide a unique opportunity for direct, one-on-one interaction with attendees. These in-depth discussions can sometimes be even more engaging and lead to more fruitful collaborations than traditional talks.
50/97
And posters can be highly interactive! Many presenters enhance their posters with QR codes linking to live demos, GitHub repos, or supplementary materials. It's a very dynamic format.
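If you want to try this for your own poster, generating a QR code takes a few lines with the `qrcode` package (the URL below is a placeholder):

```python
import qrcode  # pip install "qrcode[pil]"

# Point the code at your demo, repo, or supplementary materials.
img = qrcode.make("https://github.com/your-handle/your-demo")  # placeholder URL
img.save("poster_qr.png")  # print this image onto the poster
```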
51/97
A huge benefit for poster presenters is the Poster Spotlight Session. This is where you get to pitch your poster to the entire conference audience—something only keynotes and lightning talks get to do, as regular talks run in parallel tracks! ✨
52/97
Now, let's dive into what our peer reviewers were looking for. We used a weighted scoring system to guide the process, but reviewers themselves gave qualitative feedback, not numbers.
53/97
Reviewers didn't assign scores directly. They chose from radio-button options phrased as plain-English sentences, like "I do not recommend acceptance" or "I believe it's well-written and easy to understand." Each choice was then converted to a weighted score on the backend (see the sketch after the criteria below).
54/97
✅ Recommendation (Weight: 2.0): This was the most important factor. Overall, is the proposal a good fit for EuroSciPy? Is it interesting, relevant, and valuable to our community?
55/97
✅ Clarity (Weight: 1.5): Is the proposal well-written and easy to understand? A clear proposal is often a sign of a clear presentation to come.
56/97
✅ Audience Fit (Weight: 0.5): Does the proposal match the expected expertise of the EuroSciPy audience for the chosen track? (More about this later 👇🏼)
57/97
✅ Originality (Weight: 1.0): Is the submitter an original author or active contributor to the project they're presenting? We prioritized original work and maintainer submissions this year. (Also, more about this later 👇🏼)
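Put together, the backend conversion could look roughly like this minimal sketch. The weights are the real ones listed above; the mapping of each radio-button answer to a 0-1 value is our illustration:

```python
# Minimal sketch of the backend scoring. Weights match the criteria above;
# the 0-1 values derived from each radio-button answer are illustrative.
WEIGHTS = {
    "recommendation": 2.0,
    "clarity": 1.5,
    "audience_fit": 0.5,
    "originality": 1.0,
}

def weighted_score(answers: dict[str, float]) -> float:
    """Combine one answer per criterion into a single weighted score."""
    return sum(WEIGHTS[criterion] * value for criterion, value in answers.items())

# e.g. "I do not recommend acceptance" might map to 0.0 on 'recommendation',
# "I believe it's well-written and easy to understand" to 1.0 on 'clarity':
print(weighted_score({
    "recommendation": 0.0,
    "clarity": 1.0,
    "audience_fit": 0.5,
    "originality": 1.0,
}))  # -> 2.75
```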
58/97
To address the 'Audience Fit' confusion (see post 27!), we gave reviewers specific guidelines on the expected Python and domain knowledge for each submission type.
59/97
🧑🏫 For Tutorials, we have two distinct tracks: Beginner and Advanced. Both assume little-to-no specific domain knowledge, as the focus is on learning a tool or technique. The main difference is the expected #Python expertise.
60/97
🎤 For Talks, we aim for a balance. Presenters can assume the audience has some-to-expert #Python knowledge, but should assume little-to-no specific domain knowledge. The goal is to make your work accessible to the whole #SciPy #community!
61/97
🖼️ Posters are perfect for deep, specialized scientific topics. Here, presenters can assume up to expert-level domain knowledge, making it the ideal format for presenting complex research.
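The guidelines for the three formats, condensed into a lookup table (our paraphrase; the exact wording handed to reviewers differed):

```python
# Audience-expectation guidelines per format, paraphrased from the posts above.
# The Python level for posters isn't spelled out above, so "any" is our guess.
AUDIENCE_EXPECTATIONS = {
    "tutorial (beginner)": {"python": "beginner",       "domain": "little-to-no"},
    "tutorial (advanced)": {"python": "advanced",       "domain": "little-to-no"},
    "talk":                {"python": "some-to-expert", "domain": "little-to-no"},
    "poster":              {"python": "any",            "domain": "up to expert"},
}
```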
62/97
It's also worth noting that proposals submitted directly to the Poster track have priority over talks/tutorials that are later converted due to limited schedule space. So if you know your work is a great fit for a poster, submit it as one from the start!
63/97
So, a pro-tip for #EuroSciPy2026: if your work is highly specialized and you can't give a 10-min overview for a general audience, a poster is your best bet! 😉 We love seeing deep dives during the poster session, and they usually have a high acceptance rate!
64/97
A special note on our 'Education, Diversity & Outreach' track: while talks here don't require deep Python or domain knowledge, they must be relevant to the #ScientificPython community.
65/97
A key instruction for reviewers: highlight at least one strength and one area for improvement for every submission. This ensures #feedback is always constructive and helpful for the authors.
66/97
To help authors grow, we also let reviewers know that a summary of their constructive feedback might be shared with the submitter after the process was complete.
67/97
To facilitate detailed feedback, reviewers had two comment fields. The first, "Note for reviewers," was a real-time, collaborative tool. Notes were immediately visible to other reviewers to help provide objective context.
68/97
The "Note for reviewers" field was for sharing helpful, objective context. For example, if a proposal mentioned a niche technique like "Raman spectroscopy," a reviewer could share a Wikipedia link explaining what is that thing
69/97
This helped other reviewers, who might be Python experts but not spectroscopy experts, quickly grasp the domain. The goal was to level the playing field of knowledge for all reviewers, making the process fairer (the person writing this post hopes that's a word).
70/97
Crucially, these notes had to be strictly factual and non-judgmental. The aim was to provide context without influencing the evaluation or accidentally revealing the submitter's identity. The actual assessment went in the "What do you think?" field.
71/97
The "What do you think?" field was for the actual constructive feedback. This is where reviewers would share their assessment, including the required strength and area for improvement.
72/97
Importantly, the feedback in the "What do you think?" field was kept private during Stage 1 of the review. Reviewers could not see each other's assessments, ensuring their initial evaluations were completely independent.
73/97
This changed in Stages 2 & 3. For submissions they had already reviewed, reviewers could then see the feedback from others. This was done mostly to satisfy reviewers' curiosity (“do others agree with me?”), but also to encourage a few additional reviews to break ties.
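Expressed as code, the visibility rules for the two fields come down to something like this sketch (field and stage names are our paraphrase):

```python
# Sketch of which comment field other reviewers can see at each stage.
# For "what_do_you_think", visibility applies only to submissions the
# viewing reviewer also reviewed themselves.
def visible_to_other_reviewers(field: str, stage: int) -> bool:
    if field == "note_for_reviewers":
        return True        # factual context notes: shared in real time, all stages
    if field == "what_do_you_think":
        return stage >= 2  # assessments: private in Stage 1, visible in Stages 2-3
    raise ValueError(f"unknown field: {field!r}")
```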
74/97
To handle practical issues, reviewers could use tags to flag proposals. Tags included 'Broken Links', 'Incorrect Category', and 'Tutorial Missing Materials', among others.
75/97
A key tag was 'Not Anonymized'. If a reviewer spotted identifying info, they could flag it. The Program Team would then try to contact submitters to fix these issues where possible. But it highlighted such a huge problem that we could barely do anything here :(