Generative AI, plagiarism, and “cheating”

Back in January, I wrote a post called Beyond Cheating, reflecting on the ChatGPT bans that were rolling out across various Australian states and the "cheating" narrative that had accompanied the chatbot since its release. In that earlier post, I argued that banning and blocking generative AI would only contribute to the digital divide - students who have greater access to digital technologies would inevitably be able to access and use GAI, putting those who rely on in-school technology […]

https://leonfurze.com/2023/09/20/generative-ai-plagiarism-and-cheating/

🇧🇷 vs 🇳🇱 in #ResearchEvaluation? A sharp comparative study shows how Brazil’s high-stakes, performance-based model contrasts with the Netherlands’ strategic, decentralized approach.

https://doi.org/10.1093/reseval/rvaf013

Takeaway: Evaluation isn’t one-size-fits-all - context matters.

#ResearchAssessment #SciencePolicy #HigherEducation #ResponsibleMetrics #AcademicEvaluation

Bittersweet moment: a postdoc applicant was awarded a major postdoc fellowship, but the decision took so long that in the meantime the applicant had accepted a position elsewhere.

PhD students can't wait many months, in this case over a year, for funding to materialise: they also have bills to pay, and a life to live.

In an ideal world, funding would either resolve within four weeks of application or be allocated directly to labs – the labs would apply, like a regular grant, rather than the applicants. Labs can absorb the delay and anticipate the need, whereas applicants cannot. The labs would then assign the postdoc fellowships to whoever they thought could best deliver on the project, followed two years later by a rigorous review *of the lab*, which would increase or decrease the lab's ability to apply in the future. If this sounds bad, consider the alternative: the present system of independent postdoc fellowships, with untenably long timelines to resolution.

#academia #AcademicEvaluation

Insightful talk by @tao at the Institut d'Estudis Catalans yesterday. While I missed the opportunity to chat with him (many, many people wanted to chat with him, and I got shy), I would like to pose a couple of questions and musings here to the Fediverse.

Firstly, I wonder whether proof formalisation might be the way forward for undergraduate and graduate mathematics grading 🤔 Instead of proof questions, which might become routine to solve with large language models, one could have "project-directed" modules: students learn about a topic by formalising results in the area and exploring variants and generalisations of already formalised material.
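As a toy illustration of what such an exercise could look like (a hypothetical exercise of my own devising; plain Lean 4, no Mathlib required), a student might be asked to formalise commutativity of addition and then explore variants:

  -- Exercise: prove that addition on Nat is commutative.
  theorem my_add_comm (a b : Nat) : a + b = b + a := by
    induction b with
    | zero => rw [Nat.add_zero, Nat.zero_add]           -- base case: a + 0 = 0 + a
    | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih]  -- push succ outward, apply the IH

The proof assistant, rather than the grader, certifies correctness, so assessment can shift to the choice of statements and the exploration built around them.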

Secondly, formalisation may also be an opportunity to revise the way scientific output is evaluated. If we want to profit from a more systematic formalisation effort in mathematics, we need to look into how this would play into the incentives we currently have. (Warning, misleading wording follows.) How can we measure someone's contribution? Would it be a good idea to create an "impact factor" for the proof modules authored by someone? Also, as we move to larger-scale collaborations, it may be important to de-emphasise authoring as a pillar of scientific activity. (I have in mind, particularly, the fact that some people may end up with many small but important contributions across several projects, and that a non-negligible part of the work probably lies in coordinating efforts or revising bubbles in the blueprint. Unfortunately, my experience with Lean is nonexistent at the moment, but this should change in the near future, I hope!)
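To make that strawman concrete (entirely hypothetical notation, not something I endorse): in a project's blueprint dependency graph, one could credit an author a with something like

  credit(a) = Σ_{m authored by a} (1 + |desc(m)|) / |auth(m)|

where desc(m) is the set of results downstream of module m and auth(m) is its set of authors. Such a score would at least reward small but load-bearing lemmas, though it still ignores exactly the coordination and revision work I just mentioned.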

In any case, thank you @tao for an interesting talk and plenty of food for thought – I hope you enjoy your time visiting the CRM!

#llm #lean #academicEvaluation