Today @gvnordheim just asked #chatGPT3 to sketch a research design for our study. The result is more convincing than most initial student submissions would be... and a lot more useful than any Google search result. This is a game changer, this is no longer "one day, AI will allow us to....", but "as of today, AI can..."
So, dear fellow academics, I am curious: How do you think this will affect our own research work, but most importantly how we teach & evaluate students?

@kkvk7 @gvnordheim Well, I talked a bit to #gpt3 on Friday. My impression was that its answers (yet?) lack precision and intellectual clarity and are often contradictory. So I would not consider using it in an academic context (yet).

I also find the study description you posted pretty generic, and if you asked for more details on how to conduct the study, you'd probably very quickly reach a point where the answers are no longer helpful. But that may change soon, of course.

Yes, this is still rather generic and not (yet) suitable for scientific publications etc. @kommueller @gvnordheim
But @Kudusch and I are teaching a methods class this term with an open-book exam asking students to propose a research design for a research question. Even though the answer lacks details, it would still receive more points than most student submissions in recent years because the design does fit the RQ. So I now need to re-think how to evaluate my students' answers...

@kkvk7 @kommueller @Kudusch

The prompt was generic, so of course the answer was too. Ask for a detailed code book, for the exact description of possible categories, for possible research questions and you will get very concrete, sometimes surprisingly original, almost always solid, almost completely plausible suggestions. We need to completely rethink teaching, homework, assessment. Exciting times ahead!

@gvnordheim @kkvk7 @Kudusch I totally agree that for student assessment this is a game changer. I asked #ChatGPT several questions from the exam I will conduct this week and the answers were indeed quite convincing.

However, if you dig deeper, they tend to become imprecise. For instance, I asked a few queries about political polarization. There, the model mixed up various subdimensions and argued that affective polarization and political polarization were two distinct, unrelated categories.

@gvnordheim @kkvk7 @Kudusch I also asked #ChatGPT whether it thinks it can be of help to students writing an exam. It said no. When I said I believe it can, it answered that it is unable to know whether it can help students. When I then argued that these are contradictory claims, it denied this again, arguing that the claims were independent of each other. That's what I mean by a lack of logic.

Therefore, I'd say that for research purposes, this is not a reliable tool yet.

@kommueller @gvnordheim @kkvk7 Thinking of #ChatGPT and other tools in terms of logic, knowledge production and actual information is in my opinion the wrong way to go.

These tools are meant to (re)produce text. They are fed human-generated text and are very good at reproducing it.

@kommueller @gvnordheim @kkvk7

Analogous to tools like #DALLE2, it can even reproduce and recombine elements and apply different styles (a renaissance-style oil painting of a basket full of blue oranges).

We would not say that the tool (or the users of the tool) claims that this painting exists or that oranges are blue.

@kommueller @gvnordheim @kkvk7

I think that the text-production part of student assignments will have to take a backseat. "Write a study design" will come to mean "edit/proofread/fact-check a study design made by an AI".

I have no idea how we want to test students in the future, but "write a (surface level) text on a given subject" is now a solved problem for computers.

@Kudusch @kommueller @gvnordheim @kkvk7 Agree. If those parts of science (both in teaching and research) that involve mostly writing pastiches of previous research can demonstrably be done/supported with an LLM, it should (a) lead to more self-awareness of what actually makes scientific writing meaningful (precision, theoretical coherence, novelty) and (b) free up resources for empirical work or actual theorizing. (And it helps out researchers who just don't enjoy writing, yay.)