1/17
#SIPS2023 - and my first SIPS - is over, so it's time to go through my notes and do some reflecting and recollecting. I wrote most of this on the train back. Some of these notes are quotes that stuck with me the most; others are insights that (I think?) I've taken away.

This is going to be a longer thread, so I'm hiding most of the text behind content warnings so it doesn't mess up your timeline (helpful advice from @doctorow); hope it works...

Here goes:

@improvingpsych is not the society for open science! It is a good thing that these communities overlap, but not all ideas for improving psych will come from open science, and some may even harm it. Likewise, not everything that might improve psych will be valuable for the open science movement as a whole.
We can't save the world on our own, but all the little things we do add up. He presented 8 practical steps he takes to improve psych. The ones I personally found most compelling were:
- The 1st year of a PhD may be a good opportunity to get a Stage 1 Registered Report out. Some "friendly journals" publish reports that have successfully passed Stages 1 & 2 at PCI RR (https://rr.peercommunityin.org/) without further review!
- If you're not in a position to purge for-profit publishers from your life completely, try starting by reviewing for and citing them less.
- On achieving a more rigorous psych. science based on substantive theories: for the phenomenon you're interested in, a) consider the best general theory that might apply to the problem, and b) test it severely.
- There's no such thing as a minimal interesting _standardised_ effect size, so try to specify the minimal interesting _raw_ ES.
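A (hypothetical, not from the talk) illustration of that last point: the same raw effect turns into very different standardised effect sizes depending on the spread of the sample it happens to be measured in, which is why a "minimal interesting" threshold is only stable on the raw scale. A minimal Python sketch:

```python
# Cohen's d standardises a raw mean difference by dividing it by the SD.
# The same raw effect therefore yields different d's in different samples.

def cohens_d(raw_diff: float, sd: float) -> float:
    """Standardised effect size: raw mean difference divided by the SD."""
    return raw_diff / sd

raw_effect = 2.0  # e.g. a 2-point gain on some scale, in both samples

d_narrow = cohens_d(raw_effect, sd=2.0)  # homogeneous sample -> d = 1.0
d_wide = cohens_d(raw_effect, sd=8.0)    # heterogeneous sample -> d = 0.25
```

Deciding whether a 2-point gain matters is a substantive judgment about the raw scale; the standardised number alone can't settle it.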
Disunity can be a problem but also a sign of a diversified, mature science; there is functional and dysfunctional disunity. E.g. using different names for the same thing (or vice versa; jingle/jangle fallacies etc.) is only a problem if different subfields don't communicate. I hate huge, 1000s-of-ppl conferences as much as anyone, but I've sometimes found that sessions from a very different subfield provided a surprisingly relatable perspective on my own interests.
This relates to @matherion's great hackathon on construct taxonomies. We have to put construct meanings into words! Factor-analytic construct validity can't replace specifying how a construct's meaning relates to its measurement and iteratively bringing both into accord. That two researchers construe "attitude" differently is fine as long as it's explicit: "The best construct definition is one that everyone disagrees with." GJ introduced narrative response models and a neat tool: https://psycore.one/

From @HelenaPaterson's roundtable on inclusive assessment: I was impressed by the UK's strict rules on inclusivity in teaching and assessment! What I took away for myself is that I want to think hard about how different students may face different challenges in reaching the learning goal that I've defined for a course, and that I should try to provide different supports to set them on their individual paths there.

From his lightning talk, I was very excited about James Houghton's https://deliberation-lab.org/ project and the infrastructure they're building, and about his and Netta Weinstein's work with it in their workshop!

I learned about SEANCE for the first time from @danielmlow's talk! Very exciting, because we've been using LIWC in our lab for a long while now, and having more control over the dictionaries is :chefskiss:

@andreakispsy's lightning talk introduced me to the concept of epistemic sustainability, in the sense of reliability and longevity of our research results.
We had a very rich and multi-faceted discussion about ChatGPT et al., led by Laura and Matt Vowels. All the ideas, worries and excitement about how these models may help or hinder research and teaching should be moot if we realise that the way these systems are built is deeply unethical (e.g. exploitative labour practices) and unsafe (because of data security and privacy issues). @auzdavenice made this point very eloquently.
However, open source algorithms are already in a very exciting place. I'd love a hackathon at #SIPS2024 for working on a model that does things that we need (automated pre-registration? automated codebook? course materials?) better and more ethically than whatever the big corps are selling.
Finally, I attended a great @FORRT hackathon, led by Alaa Aldoh, gathering info on (non-)replications and meta-analyses of effects from all subfields of psych. in order to create an easily readable and searchable overview at https://forrt.org/reversals/. I've had something like this in mind for ed. psych. for the longest time, so it's great that people are doing this, and not just for one specific subfield. It is a _lot_ of work though, so get in touch with them if you want to help!
And, in the end, @rmrahal made a very succinct but multi-faceted argument for why we need good working conditions if we want good science. There was one thought that I've had before but that came back to mind during the talk: I don't think every person who does a Ph.D. expects a permanent position in academia afterwards, and many, I assume, don't even plan to stay in academia anyway.
But the problem is that - at least in the context that I know best and that Rima-Maria referred to the most, Germany - most psych. Ph.D. programs don't prepare you well for anything else or build good bridges out of academia. So yes, we need permanent positions for post-docs, but we also need to provide good training and opportunities. Psych Ph.D.s are highly qualified people and an asset to any context they may end up in, and we should treat them that way, not as "failed academics".
I had many more discussions and visited more sessions, so if I didn't mention your unconference, hackathon, workshop or lightning talk even if I was in it, rest assured I made notes (oh so many notes...) but I had to draw the line somewhere...
Oh, and I know that there was too much text on my poster (on a proposed study of academic teachers' perceptions of "AI" tools), the type was too small, and it wasn't really enticing to read, but luckily there's OSF for that, and if you're interested you can now peruse it at your leisure: https://osf.io/nj27d

17/17 Okay, sorry for the wall of text, but that's on-brand for me... (Guess I should start a blog.)

This is just to show that #SIPS2023 gave me a lot to think about and inspired and motivated me. Yes, much of academia as a whole and psych. specifically is a mess, but there's so much great work going on that I'm coming away very hopeful. I especially enjoyed the immediate sense of community, the deep discussions and meeting many, many new people.

Looking forward to #SIPS2024!

Oh, and wow is Pride Village ever a great place!
@BBruPsyc thanks for this awesome thread! Also, you might want to check out https://quartodon.opens.science 🙂

@matherion Haha, I love this! At this point they should just make R an OS. But yeah, that would've helped and I'll try it out!