About 50 judges working in #Greece completed a novel Judicial Heuristics Assessment Questionnaire (J-HAQ), a 5-item reflection test, and related measures.

Judges' reported use of anchoring #heuristics correlated with their reflection test scores.

https://doi.org/10.3389/fcogn.2025.1421488

#law #xJur #psychology

Frontiers | Assessing judges' use and awareness of cognitive heuristic decision-making

Reflecting on our intuitions and principles until they are logically consistent is hard. Can #AI do it?

Ma et al. explicate #ReflectiveEquilibrium (RE) and test how #LLMs iteratively achieve RE on moral scenarios from the #ETHICS benchmark.

https://doi.org/10.1145/3722554

#xPhi #xJur

Do people prefer #prison sentence recommendations from humans or from #AI?

Two large pre-registered experiments in Japan (combined N > 3,400) found "no preference for deferring to human ...or [to] AI judgments [on] sentencing decisions".

https://doi.org/10.1371/journal.pone.0318486

#law #xJur #xPhi #psychology

Judges versus artificial intelligence in juror decision-making in criminal trials: Evidence from two pre-registered experiments

Background: Artificial intelligence (AI) is anticipated to play a significant role in criminal trials involving citizen jurors. Prior studies have suggested that AI is not widely preferred in ethical decision-making contexts, but little research has compared jurors’ reliance on judgments by human judges versus AI in such settings.

Objectives: This study examined whether jurors are more likely to defer to judgments by human judges or AI, especially in cases involving mitigating circumstances in which human-like reasoning may be valued.

Methods: Two pre-registered online experiments were conducted with Japanese participants (Experiment 1: N = 1,735, Mage = 48.4; Experiment 2: N = 1,731, Mage = 48.5). Participants reviewed two murder trial vignettes and made sentencing decisions (1 = suspended sentence; 8 = prison sentence) under two conditions: trials with and without mitigating circumstances.

Results and conclusion: Across both experiments, participants showed no preference for deferring to human judges’ or AI judgments when making sentencing decisions. While suspended sentences were more common in cases with mitigating circumstances, this tendency was unrelated to the judgment source. These findings suggest that jurors do not inherently avoid algorithmic judgments and may consider AI opinions on par with those of human judges in certain contexts. However, whether this leads to improved decision-making quality remains an open question, as objectivity (a strength of AI) and emotional considerations (a safeguard for fairness) may interact in complex ways during juror deliberations. Future research should further explore how these factors influence juror attitudes and decisions in diverse trial scenarios, taking into account potential biases in existing literature.

Judges' asylum decisions vary wildly, violating a "prerequisite for ...legal coherence—to 'treat like cases alike.'"

This paper shows how #AI and #algorithms can "instantiate a reasonable approximation of a coherence theory of #truth":

https://osf.io/4tjz5_v1

#law #xJur #CogSci

How does disgust impact judgments about criminality? Does legal expertise matter?

In an online study of over 1,400 laypeople and legal professionals, (a) a “virtual child pornography vignette (characterized as low in harm, high in disgust) was criminalized more readily than the financial harm vignette (high in harm, low in disgust), and (b) disgust sensitivity was associated with the decision to criminalize”.

https://doi.org/10.1057/s41599-024-02842-8

#law #xPhi #xJur #moralPsychology #philosophy #expertise #edu

Another study finds honesty is more than just truthfulness.

Makes sense: you can mislead or even conceal with only true information (e.g., if you do not include *all* the true information).

The final study of this paper confirms: the “communicative elements of honesty [include not just] truthful speech [but] not concealing or misrepresenting”.

https://doi.org/10.1177/01461672231195355

#ethics #psychology #xPhi #experimentalPhilosophy #law #xJur #experimentalJurisprudence #moralPsych 

David Melnikoff presented "Bayesian and Wishful Thinking are Compatible" (a project with Nina Strohminger):

The finding: people felt better than predicted about the prospect of either prosecuting or defending a defendant, no matter which side they were incentivized to argue.

Preprint (pending revision for Nature Human Behaviour): https://doi.org/10.31234/osf.io/yhmvw

#xJur #xPhi #law #rationality #cogSci #behavioralEconomics

I am sorry for (and amused by) my latest #typo: "...experimental philosophers (i.e., people who use the stools of cognitive science to study philosophical thinking)".

It's from an earlier draft of my brief answer to, "What is experimental jurisprudence?" https://www.quora.com/What-is-experimental-jurisprudence/answer/Nick-Byrd?ch=10&oid=1477743650165973&share=044ce01f&srid=hCDA&target_type=answer

#xPhi #CogSci #xJur #mistakesWereMade 

What is "experimental jurisprudence"?

Nick Byrd's answer: Experimental jurisprudence was started by experimental philosophers (i.e., people who use the tools of cognitive science to study philosophical thinking). Some seminal papers examined how bad side effects were deemed more intentional than good side effects [1] and how humanizi...
